Mar 13 12:36:35.726485 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 13 12:36:36.331493 master-0 kubenswrapper[3989]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:36:36.331493 master-0 kubenswrapper[3989]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 13 12:36:36.331493 master-0 kubenswrapper[3989]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:36:36.331493 master-0 kubenswrapper[3989]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:36:36.331493 master-0 kubenswrapper[3989]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 13 12:36:36.331493 master-0 kubenswrapper[3989]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:36:36.332855 master-0 kubenswrapper[3989]: I0313 12:36:36.332638 3989 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 12:36:36.338030 master-0 kubenswrapper[3989]: W0313 12:36:36.337968 3989 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:36:36.338030 master-0 kubenswrapper[3989]: W0313 12:36:36.338008 3989 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:36:36.338030 master-0 kubenswrapper[3989]: W0313 12:36:36.338015 3989 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:36:36.338030 master-0 kubenswrapper[3989]: W0313 12:36:36.338021 3989 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:36:36.338030 master-0 kubenswrapper[3989]: W0313 12:36:36.338027 3989 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:36:36.338030 master-0 kubenswrapper[3989]: W0313 12:36:36.338035 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:36:36.338030 master-0 kubenswrapper[3989]: W0313 12:36:36.338040 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:36:36.338030 master-0 kubenswrapper[3989]: W0313 12:36:36.338046 3989 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338053 3989 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338058 3989 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338065 3989 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338071 3989 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338112 3989 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338120 3989 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338126 3989 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338131 3989 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338136 3989 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338141 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338145 3989 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338150 3989 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338155 3989 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338160 3989 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338164 3989 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338169 3989 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338174 3989 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338179 3989 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:36:36.338315 master-0 kubenswrapper[3989]: W0313 12:36:36.338209 3989 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338213 3989 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338218 3989 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338223 3989 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338227 3989 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338233 3989 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338237 3989 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338241 3989 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338246 3989 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338251 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338258 3989 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338264 3989 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338271 3989 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338276 3989 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338282 3989 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338287 3989 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338292 3989 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338297 3989 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338301 3989 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:36:36.338861 master-0 kubenswrapper[3989]: W0313 12:36:36.338307 3989 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338313 3989 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338319 3989 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338324 3989 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338329 3989 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338335 3989 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338340 3989 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338345 3989 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338350 3989 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338355 3989 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338361 3989 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338366 3989 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338370 3989 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338375 3989 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338379 3989 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338385 3989 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338390 3989 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338395 3989 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338400 3989 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338405 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:36:36.339480 master-0 kubenswrapper[3989]: W0313 12:36:36.338409 3989 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: W0313 12:36:36.338414 3989 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: W0313 12:36:36.338419 3989 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: W0313 12:36:36.338423 3989 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: W0313 12:36:36.338428 3989 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: W0313 12:36:36.338432 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: W0313 12:36:36.338437 3989 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.339934 3989 flags.go:64] FLAG: --address="0.0.0.0"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.339958 3989 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.339971 3989 flags.go:64] FLAG: --anonymous-auth="true"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.339979 3989 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.339988 3989 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.339994 3989 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340002 3989 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340009 3989 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340015 3989 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340021 3989 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340028 3989 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340034 3989 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340039 3989 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340045 3989 flags.go:64] FLAG: --cgroup-root=""
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340052 3989 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340058 3989 flags.go:64] FLAG: --client-ca-file=""
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340064 3989 flags.go:64] FLAG: --cloud-config=""
Mar 13 12:36:36.340047 master-0 kubenswrapper[3989]: I0313 12:36:36.340070 3989 flags.go:64] FLAG: --cloud-provider=""
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340075 3989 flags.go:64] FLAG: --cluster-dns="[]"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340082 3989 flags.go:64] FLAG: --cluster-domain=""
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340088 3989 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340094 3989 flags.go:64] FLAG: --config-dir=""
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340099 3989 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340105 3989 flags.go:64] FLAG: --container-log-max-files="5"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340113 3989 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340119 3989 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340125 3989 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340131 3989 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340137 3989 flags.go:64] FLAG: --contention-profiling="false"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340144 3989 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340150 3989 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340156 3989 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340161 3989 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340169 3989 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340174 3989 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340180 3989 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340193 3989 flags.go:64] FLAG: --enable-load-reader="false"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340199 3989 flags.go:64] FLAG: --enable-server="true"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340205 3989 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340215 3989 flags.go:64] FLAG: --event-burst="100"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340222 3989 flags.go:64] FLAG: --event-qps="50"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340228 3989 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 13 12:36:36.340870 master-0 kubenswrapper[3989]: I0313 12:36:36.340233 3989 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340238 3989 flags.go:64] FLAG: --eviction-hard=""
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340245 3989 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340250 3989 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340256 3989 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340262 3989 flags.go:64] FLAG: --eviction-soft=""
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340277 3989 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340283 3989 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340289 3989 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340302 3989 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340308 3989 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340313 3989 flags.go:64] FLAG: --fail-swap-on="true"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340318 3989 flags.go:64] FLAG: --feature-gates=""
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340323 3989 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340329 3989 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340335 3989 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340341 3989 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340347 3989 flags.go:64] FLAG: --healthz-port="10248"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340352 3989 flags.go:64] FLAG: --help="false"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340358 3989 flags.go:64] FLAG: --hostname-override=""
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340363 3989 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340367 3989 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340372 3989 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340377 3989 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340381 3989 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 13 12:36:36.341644 master-0 kubenswrapper[3989]: I0313 12:36:36.340386 3989 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340390 3989 flags.go:64] FLAG: --image-service-endpoint=""
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340394 3989 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340398 3989 flags.go:64] FLAG: --kube-api-burst="100"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340403 3989 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340408 3989 flags.go:64] FLAG: --kube-api-qps="50"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340413 3989 flags.go:64] FLAG: --kube-reserved=""
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340417 3989 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340421 3989 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340426 3989 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340430 3989 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340434 3989 flags.go:64] FLAG: --lock-file=""
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340438 3989 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340443 3989 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340447 3989 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340454 3989 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340462 3989 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340467 3989 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340471 3989 flags.go:64] FLAG: --logging-format="text"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340475 3989 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340480 3989 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340484 3989 flags.go:64] FLAG: --manifest-url=""
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340489 3989 flags.go:64] FLAG: --manifest-url-header=""
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340495 3989 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340499 3989 flags.go:64] FLAG: --max-open-files="1000000"
Mar 13 12:36:36.342351 master-0 kubenswrapper[3989]: I0313 12:36:36.340505 3989 flags.go:64] FLAG: --max-pods="110"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340509 3989 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340514 3989 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340518 3989 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340522 3989 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340526 3989 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340531 3989 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340536 3989 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340548 3989 flags.go:64] FLAG: --node-status-max-images="50"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340553 3989 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340557 3989 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340562 3989 flags.go:64] FLAG: --pod-cidr=""
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340566 3989 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340589 3989 flags.go:64] FLAG: --pod-manifest-path=""
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340594 3989 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340598 3989 flags.go:64] FLAG: --pods-per-core="0"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340603 3989 flags.go:64] FLAG: --port="10250"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340607 3989 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340611 3989 flags.go:64] FLAG: --provider-id=""
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340616 3989 flags.go:64] FLAG: --qos-reserved=""
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340620 3989 flags.go:64] FLAG: --read-only-port="10255"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340625 3989 flags.go:64] FLAG: --register-node="true"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340629 3989 flags.go:64] FLAG: --register-schedulable="true"
Mar 13 12:36:36.343074 master-0 kubenswrapper[3989]: I0313 12:36:36.340653 3989 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340691 3989 flags.go:64] FLAG: --registry-burst="10"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340697 3989 flags.go:64] FLAG: --registry-qps="5"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340703 3989 flags.go:64] FLAG: --reserved-cpus=""
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340707 3989 flags.go:64] FLAG: --reserved-memory=""
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340716 3989 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340731 3989 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340737 3989 flags.go:64] FLAG: --rotate-certificates="false"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340742 3989 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340748 3989 flags.go:64] FLAG: --runonce="false"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340753 3989 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340758 3989 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340762 3989 flags.go:64] FLAG: --seccomp-default="false"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340767 3989 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340771 3989 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340776 3989 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340780 3989 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340785 3989 flags.go:64] FLAG: --storage-driver-password="root"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340789 3989 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340794 3989 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340798 3989 flags.go:64] FLAG: --storage-driver-user="root"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340802 3989 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340808 3989 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340813 3989 flags.go:64] FLAG: --system-cgroups=""
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340817 3989 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 13 12:36:36.343635 master-0 kubenswrapper[3989]: I0313 12:36:36.340825 3989 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340829 3989 flags.go:64] FLAG: --tls-cert-file=""
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340834 3989 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340839 3989 flags.go:64] FLAG: --tls-min-version=""
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340844 3989 flags.go:64] FLAG: --tls-private-key-file=""
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340848 3989 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340852 3989 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340869 3989 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340875 3989 flags.go:64] FLAG: --v="2"
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340893 3989 flags.go:64] FLAG: --version="false"
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340899 3989 flags.go:64] FLAG: --vmodule=""
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340904 3989 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: I0313 12:36:36.340909 3989 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341024 3989 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341032 3989 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341037 3989 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341041 3989 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341046 3989 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341051 3989 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341055 3989 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341060 3989 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341064 3989 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341069 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:36:36.344265 master-0 kubenswrapper[3989]: W0313 12:36:36.341074 3989 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341078 3989 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341082 3989 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341087 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341091 3989 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341096 3989 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341100 3989 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341104 3989 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341108 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341113 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341118 3989 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341123 3989 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341127 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341131 3989 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341135 3989 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341138 3989 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341148 3989 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341153 3989 feature_gate.go:330] unrecognized feature
gate: MetricsCollectionProfiles Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341158 3989 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341163 3989 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 12:36:36.344887 master-0 kubenswrapper[3989]: W0313 12:36:36.341167 3989 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341172 3989 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341176 3989 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341184 3989 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341192 3989 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341198 3989 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341203 3989 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341208 3989 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341215 3989 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341221 3989 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341226 3989 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341232 3989 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341236 3989 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341241 3989 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341247 3989 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341251 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341256 3989 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341260 3989 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341264 3989 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341269 3989 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:36:36.345442 master-0 kubenswrapper[3989]: W0313 12:36:36.341272 3989 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341278 3989 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341283 3989 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341288 3989 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341293 3989 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341297 3989 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341304 3989 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341310 3989 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341325 3989 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341330 3989 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341335 3989 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341339 3989 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341344 3989 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341349 3989 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341354 3989 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341358 3989 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341362 3989 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341366 3989 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341369 3989 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:36:36.345977 master-0 kubenswrapper[3989]: W0313 12:36:36.341373 3989 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:36:36.346479 master-0 kubenswrapper[3989]: W0313 12:36:36.341377 3989 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:36:36.346479 master-0 kubenswrapper[3989]: W0313 12:36:36.341380 3989 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:36:36.346479 master-0 kubenswrapper[3989]: I0313 12:36:36.342905 3989 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:36:36.352591 master-0 kubenswrapper[3989]: I0313 12:36:36.352479 3989 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 13 12:36:36.352591 master-0 kubenswrapper[3989]: I0313 12:36:36.352550 3989 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352633 3989 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352642 3989 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352647 3989 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352652 3989 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352656 3989 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352660 3989 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352664 3989 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352668 3989 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352672 3989 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352678 3989 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352685 3989 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352690 3989 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352695 3989 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352700 3989 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352704 3989 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352708 3989 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352712 3989 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352715 3989 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:36:36.352752 master-0 kubenswrapper[3989]: W0313 12:36:36.352720 3989 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352724 3989 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352727 3989 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352731 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352734 3989 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352738 3989 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352741 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352745 3989 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352748 3989 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352753 3989 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352757 3989 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352762 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352766 3989 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352770 3989 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352774 3989 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352779 3989 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352792 3989 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352796 3989 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352800 3989 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352804 3989 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:36:36.353318 master-0 kubenswrapper[3989]: W0313 12:36:36.352807 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352811 3989 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352815 3989 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352819 3989 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352822 3989 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352826 3989 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352830 3989 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352833 3989 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352837 3989 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352841 3989 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352844 3989 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352848 3989 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352852 3989 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352856 3989 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352861 3989 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352865 3989 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352868 3989 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352872 3989 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352876 3989 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352879 3989 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:36:36.353857 master-0 kubenswrapper[3989]: W0313 12:36:36.352883 3989 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352887 3989 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352892 3989 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352896 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352900 3989 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352904 3989 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352908 3989 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352912 3989 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352915 3989 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352919 3989 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352923 3989 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352926 3989 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352932 3989 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: W0313 12:36:36.352936 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:36:36.354726 master-0 kubenswrapper[3989]: I0313 12:36:36.352942 3989 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353047 3989 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353054 3989 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353058 3989 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353062 3989 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353065 3989 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353071 3989 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353077 3989 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353081 3989 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353084 3989 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353089 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353093 3989 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353097 3989 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353103 3989 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353107 3989 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353112 3989 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353115 3989 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353119 3989 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353124 3989 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:36:36.355130 master-0 kubenswrapper[3989]: W0313 12:36:36.353128 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353133 3989 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353137 3989 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353141 3989 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353145 3989 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353148 3989 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353152 3989 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353156 3989 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353159 3989 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353163 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353167 3989 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353171 3989 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353175 3989 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353178 3989 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353182 3989 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353186 3989 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353189 3989 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353193 3989 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353196 3989 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353202 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:36:36.355866 master-0 kubenswrapper[3989]: W0313 12:36:36.353206 3989 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353209 3989 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353213 3989 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353216 3989 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353221 3989 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353226 3989 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353231 3989 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353235 3989 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353240 3989 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353244 3989 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353248 3989 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353252 3989 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353256 3989 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353260 3989 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353263 3989 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353267 3989 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353271 3989 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353274 3989 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353278 3989 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:36:36.356442 master-0 kubenswrapper[3989]: W0313 12:36:36.353281 3989 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353285 3989 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353289 3989 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353292 3989 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353296 3989 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353300 3989 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353303 3989 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353307 3989 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353310 3989 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353314 3989 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353317 3989 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353321 3989 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353326 3989 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353335 3989 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: W0313 12:36:36.353339 3989 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:36:36.357034 master-0 kubenswrapper[3989]: I0313 12:36:36.353346 3989 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:36:36.357588 master-0 kubenswrapper[3989]: I0313 12:36:36.353507 3989 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 12:36:36.357588 master-0 kubenswrapper[3989]: I0313 12:36:36.357279 3989 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 13 12:36:36.359457 master-0 kubenswrapper[3989]: I0313 12:36:36.359416 3989 server.go:997] "Starting client certificate rotation"
Mar 13 12:36:36.359515 master-0 kubenswrapper[3989]: I0313 12:36:36.359466 3989 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 13 12:36:36.359661 master-0 kubenswrapper[3989]: I0313 12:36:36.359627 3989 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 13 12:36:36.384772 master-0 kubenswrapper[3989]: I0313 12:36:36.384705 3989 dynamic_cafile_content.go:123]
"Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 12:36:36.386792 master-0 kubenswrapper[3989]: I0313 12:36:36.386731 3989 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 12:36:36.389994 master-0 kubenswrapper[3989]: E0313 12:36:36.389942 3989 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:36.402360 master-0 kubenswrapper[3989]: I0313 12:36:36.402299 3989 log.go:25] "Validated CRI v1 runtime API" Mar 13 12:36:36.410096 master-0 kubenswrapper[3989]: I0313 12:36:36.410045 3989 log.go:25] "Validated CRI v1 image API" Mar 13 12:36:36.412973 master-0 kubenswrapper[3989]: I0313 12:36:36.412924 3989 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 12:36:36.417527 master-0 kubenswrapper[3989]: I0313 12:36:36.417471 3989 fs.go:135] Filesystem UUIDs: map[1540ec0a-5f02-47ef-9901-1615d58a2814:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Mar 13 12:36:36.417527 master-0 kubenswrapper[3989]: I0313 12:36:36.417507 3989 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Mar 13 12:36:36.443415 master-0 kubenswrapper[3989]: I0313 12:36:36.442066 3989 manager.go:217] Machine: {Timestamp:2026-03-13 
12:36:36.438591206 +0000 UTC m=+0.557058843 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:fe2021b5fe9941cbb2f9ca5654d6ac6f SystemUUID:fe2021b5-fe99-41cb-b2f9-ca5654d6ac6f BootID:1315907d-16f0-44fe-950e-68be880afcd6 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:fc:21:de Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:f6:fc:d3:7e:3d:76 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] 
Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified 
Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 12:36:36.443415 master-0 kubenswrapper[3989]: I0313 12:36:36.443325 3989 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 13 12:36:36.443904 master-0 kubenswrapper[3989]: I0313 12:36:36.443755 3989 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 12:36:36.446559 master-0 kubenswrapper[3989]: I0313 12:36:36.446461 3989 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 12:36:36.447051 master-0 kubenswrapper[3989]: I0313 12:36:36.446933 3989 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 12:36:36.447784 master-0 kubenswrapper[3989]: I0313 12:36:36.447106 3989 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 12:36:36.447851 master-0 kubenswrapper[3989]: I0313 12:36:36.447817 3989 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 12:36:36.447851 master-0 kubenswrapper[3989]: I0313 12:36:36.447833 3989 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 12:36:36.448176 master-0 kubenswrapper[3989]: I0313 12:36:36.448127 3989 manager.go:142] 
"Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 12:36:36.448252 master-0 kubenswrapper[3989]: I0313 12:36:36.448181 3989 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 12:36:36.448558 master-0 kubenswrapper[3989]: I0313 12:36:36.448521 3989 state_mem.go:36] "Initialized new in-memory state store" Mar 13 12:36:36.448828 master-0 kubenswrapper[3989]: I0313 12:36:36.448793 3989 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 13 12:36:36.455405 master-0 kubenswrapper[3989]: I0313 12:36:36.455324 3989 kubelet.go:418] "Attempting to sync node with API server" Mar 13 12:36:36.455560 master-0 kubenswrapper[3989]: I0313 12:36:36.455529 3989 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 12:36:36.455692 master-0 kubenswrapper[3989]: I0313 12:36:36.455634 3989 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 13 12:36:36.455692 master-0 kubenswrapper[3989]: I0313 12:36:36.455657 3989 kubelet.go:324] "Adding apiserver pod source" Mar 13 12:36:36.455758 master-0 kubenswrapper[3989]: I0313 12:36:36.455693 3989 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 12:36:36.461722 master-0 kubenswrapper[3989]: I0313 12:36:36.461664 3989 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 13 12:36:36.464227 master-0 kubenswrapper[3989]: I0313 12:36:36.464189 3989 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 13 12:36:36.464433 master-0 kubenswrapper[3989]: W0313 12:36:36.464371 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
192.168.32.10:6443: connect: connection refused Mar 13 12:36:36.464470 master-0 kubenswrapper[3989]: W0313 12:36:36.464423 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:36.464499 master-0 kubenswrapper[3989]: I0313 12:36:36.464482 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 13 12:36:36.464526 master-0 kubenswrapper[3989]: E0313 12:36:36.464490 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:36.464526 master-0 kubenswrapper[3989]: I0313 12:36:36.464511 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 13 12:36:36.464526 master-0 kubenswrapper[3989]: I0313 12:36:36.464525 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 13 12:36:36.464634 master-0 kubenswrapper[3989]: I0313 12:36:36.464539 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 13 12:36:36.464634 master-0 kubenswrapper[3989]: I0313 12:36:36.464548 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 13 12:36:36.464634 master-0 kubenswrapper[3989]: I0313 12:36:36.464559 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 13 12:36:36.464634 master-0 kubenswrapper[3989]: E0313 12:36:36.464494 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:36.464634 master-0 kubenswrapper[3989]: I0313 12:36:36.464597 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 13 12:36:36.464754 master-0 kubenswrapper[3989]: I0313 12:36:36.464687 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 13 12:36:36.464754 master-0 kubenswrapper[3989]: I0313 12:36:36.464709 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 13 12:36:36.464754 master-0 kubenswrapper[3989]: I0313 12:36:36.464721 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 13 12:36:36.464754 master-0 kubenswrapper[3989]: I0313 12:36:36.464733 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 13 12:36:36.464946 master-0 kubenswrapper[3989]: I0313 12:36:36.464917 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 13 12:36:36.466628 master-0 kubenswrapper[3989]: I0313 12:36:36.466594 3989 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 13 12:36:36.467337 master-0 kubenswrapper[3989]: I0313 12:36:36.467301 3989 server.go:1280] "Started kubelet" Mar 13 12:36:36.468333 master-0 kubenswrapper[3989]: I0313 12:36:36.468185 3989 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 12:36:36.468333 master-0 kubenswrapper[3989]: I0313 12:36:36.468299 3989 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 12:36:36.468438 master-0 kubenswrapper[3989]: I0313 12:36:36.468405 3989 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 13 12:36:36.469034 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 13 12:36:36.469158 master-0 kubenswrapper[3989]: I0313 12:36:36.469060 3989 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 12:36:36.470375 master-0 kubenswrapper[3989]: I0313 12:36:36.470340 3989 server.go:449] "Adding debug handlers to kubelet server" Mar 13 12:36:36.471506 master-0 kubenswrapper[3989]: I0313 12:36:36.471449 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:36.474036 master-0 kubenswrapper[3989]: I0313 12:36:36.473976 3989 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 13 12:36:36.474415 master-0 kubenswrapper[3989]: I0313 12:36:36.474369 3989 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 12:36:36.475168 master-0 kubenswrapper[3989]: I0313 12:36:36.474476 3989 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 13 12:36:36.475168 master-0 kubenswrapper[3989]: I0313 12:36:36.474497 3989 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 13 12:36:36.475168 master-0 kubenswrapper[3989]: I0313 12:36:36.474644 3989 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 13 12:36:36.475168 master-0 kubenswrapper[3989]: E0313 12:36:36.474686 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:36:36.475168 master-0 kubenswrapper[3989]: I0313 12:36:36.474697 3989 reconstruct.go:97] "Volume reconstruction finished" Mar 13 12:36:36.475168 master-0 kubenswrapper[3989]: I0313 12:36:36.474740 3989 reconciler.go:26] "Reconciler: start to sync state" Mar 13 12:36:36.475676 master-0 kubenswrapper[3989]: W0313 12:36:36.475543 3989 reflector.go:561] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:36.475676 master-0 kubenswrapper[3989]: E0313 12:36:36.475618 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:36.477193 master-0 kubenswrapper[3989]: E0313 12:36:36.477145 3989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 13 12:36:36.478033 master-0 kubenswrapper[3989]: E0313 12:36:36.474320 3989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c66cfeb312158 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.467261784 +0000 UTC m=+0.585729421,LastTimestamp:2026-03-13 12:36:36.467261784 +0000 UTC m=+0.585729421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:36.488496 master-0 kubenswrapper[3989]: I0313 12:36:36.488468 3989 
factory.go:55] Registering systemd factory Mar 13 12:36:36.488655 master-0 kubenswrapper[3989]: I0313 12:36:36.488641 3989 factory.go:221] Registration of the systemd container factory successfully Mar 13 12:36:36.489828 master-0 kubenswrapper[3989]: I0313 12:36:36.489780 3989 factory.go:153] Registering CRI-O factory Mar 13 12:36:36.489899 master-0 kubenswrapper[3989]: I0313 12:36:36.489856 3989 factory.go:221] Registration of the crio container factory successfully Mar 13 12:36:36.490049 master-0 kubenswrapper[3989]: I0313 12:36:36.490022 3989 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 13 12:36:36.490142 master-0 kubenswrapper[3989]: I0313 12:36:36.490124 3989 factory.go:103] Registering Raw factory Mar 13 12:36:36.490185 master-0 kubenswrapper[3989]: I0313 12:36:36.490155 3989 manager.go:1196] Started watching for new ooms in manager Mar 13 12:36:36.491225 master-0 kubenswrapper[3989]: E0313 12:36:36.491185 3989 kubelet.go:1495] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 13 12:36:36.493567 master-0 kubenswrapper[3989]: I0313 12:36:36.493502 3989 manager.go:319] Starting recovery of all containers Mar 13 12:36:36.517593 master-0 kubenswrapper[3989]: I0313 12:36:36.517521 3989 manager.go:324] Recovery completed Mar 13 12:36:36.527600 master-0 kubenswrapper[3989]: I0313 12:36:36.527330 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:36.557246 master-0 kubenswrapper[3989]: I0313 12:36:36.557152 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:36.557246 master-0 kubenswrapper[3989]: I0313 12:36:36.557238 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:36.557246 master-0 kubenswrapper[3989]: I0313 12:36:36.557249 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:36.558083 master-0 kubenswrapper[3989]: I0313 12:36:36.558005 3989 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 12:36:36.558083 master-0 kubenswrapper[3989]: I0313 12:36:36.558068 3989 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 12:36:36.558157 master-0 kubenswrapper[3989]: I0313 12:36:36.558099 3989 state_mem.go:36] "Initialized new in-memory state store" Mar 13 12:36:36.575817 master-0 kubenswrapper[3989]: E0313 12:36:36.575746 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:36:36.676193 master-0 kubenswrapper[3989]: E0313 12:36:36.676120 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:36:36.678757 master-0 kubenswrapper[3989]: E0313 12:36:36.678714 3989 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 13 12:36:36.739947 master-0 kubenswrapper[3989]: I0313 12:36:36.739775 3989 policy_none.go:49] "None policy: Start" Mar 13 12:36:36.741567 master-0 kubenswrapper[3989]: I0313 12:36:36.741541 3989 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 12:36:36.741656 master-0 kubenswrapper[3989]: I0313 12:36:36.741595 3989 state_mem.go:35] "Initializing new in-memory state store" Mar 13 12:36:36.777192 master-0 kubenswrapper[3989]: E0313 12:36:36.777041 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: E0313 12:36:36.877707 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.934822 3989 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.936898 3989 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.936970 3989 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.937031 3989 kubelet.go:2335] "Starting kubelet main sync loop" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: E0313 12:36:36.937167 3989 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: W0313 12:36:36.958777 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: E0313 12:36:36.958853 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: E0313 12:36:36.978495 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.988291 3989 manager.go:334] "Starting Device Plugin manager" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.988360 3989 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.988378 3989 server.go:79] "Starting device plugin registration server" Mar 13 12:36:37.074804 master-0 
kubenswrapper[3989]: I0313 12:36:36.988721 3989 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.988741 3989 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.988976 3989 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.989154 3989 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:36.989165 3989 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: E0313 12:36:36.990286 3989 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.037835 3989 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.037999 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.039468 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.039505 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 
12:36:37.039541 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.039701 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.040182 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.040343 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.040817 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.040891 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.040905 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.041228 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.041330 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.041359 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.041379 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.041643 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.041773 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.042286 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.042324 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.042333 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.042454 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.042570 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.042624 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.074804 master-0 kubenswrapper[3989]: I0313 12:36:37.042800 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.042830 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.042843 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.043086 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.043106 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.043115 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.043304 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.043372 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.043398 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.043666 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.043733 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.043760 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.044464 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.044496 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.044509 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.044703 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.044738 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.044750 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.047159 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.047189 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.048378 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.048400 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.077010 master-0 kubenswrapper[3989]: I0313 12:36:37.048411 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.078809 master-0 kubenswrapper[3989]: I0313 12:36:37.078764 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.078877 master-0 kubenswrapper[3989]: I0313 12:36:37.078807 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.078877 master-0 kubenswrapper[3989]: I0313 12:36:37.078847 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.078877 master-0 kubenswrapper[3989]: I0313 12:36:37.078869 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:36:37.078960 master-0 kubenswrapper[3989]: I0313 12:36:37.078884 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.078960 master-0 kubenswrapper[3989]: I0313 12:36:37.078902 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:36:37.078960 master-0 kubenswrapper[3989]: I0313 12:36:37.078933 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:36:37.079099 master-0 kubenswrapper[3989]: I0313 12:36:37.078983 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.079099 master-0 kubenswrapper[3989]: I0313 12:36:37.079038 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.079099 master-0 kubenswrapper[3989]: I0313 12:36:37.079068 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.079099 master-0 kubenswrapper[3989]: I0313 12:36:37.079086 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.079099 master-0 kubenswrapper[3989]: I0313 12:36:37.079102 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:36:37.079253 master-0 kubenswrapper[3989]: I0313 12:36:37.079124 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.079253 master-0 kubenswrapper[3989]: I0313 12:36:37.079141 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.079253 master-0 kubenswrapper[3989]: I0313 12:36:37.079158 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.079253 master-0 kubenswrapper[3989]: I0313 12:36:37.079174 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:36:37.079253 master-0 kubenswrapper[3989]: I0313 12:36:37.079212 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:36:37.080199 master-0 kubenswrapper[3989]: E0313 12:36:37.080039 3989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 13 12:36:37.089713 master-0 kubenswrapper[3989]: I0313 12:36:37.089661 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.090677 master-0 kubenswrapper[3989]: I0313 12:36:37.090616 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.090677 master-0 kubenswrapper[3989]: I0313 12:36:37.090649 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.090677 master-0 kubenswrapper[3989]: I0313 12:36:37.090662 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.090915 master-0 kubenswrapper[3989]: I0313 12:36:37.090733 3989 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:36:37.091894 master-0 kubenswrapper[3989]: E0313 12:36:37.091825 3989 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 12:36:37.179667 master-0 kubenswrapper[3989]: I0313 12:36:37.179497 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.179667 master-0 kubenswrapper[3989]: I0313 12:36:37.179560 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.179667 master-0 kubenswrapper[3989]: I0313 12:36:37.179597 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.179667 master-0 kubenswrapper[3989]: I0313 12:36:37.179615 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:36:37.179988 master-0 kubenswrapper[3989]: I0313 12:36:37.179715 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.179988 master-0 kubenswrapper[3989]: I0313 12:36:37.179833 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.179988 master-0 kubenswrapper[3989]: I0313 12:36:37.179834 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.179988 master-0 kubenswrapper[3989]: I0313 12:36:37.179916 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.179988 master-0 kubenswrapper[3989]: I0313 12:36:37.179942 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:36:37.179988 master-0 kubenswrapper[3989]: I0313 12:36:37.179958 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.180215 master-0 kubenswrapper[3989]: I0313 12:36:37.179995 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:36:37.180215 master-0 kubenswrapper[3989]: I0313 12:36:37.179998 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.180215 master-0 kubenswrapper[3989]: I0313 12:36:37.180011 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:36:37.180215 master-0 kubenswrapper[3989]: I0313 12:36:37.180046 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.180215 master-0 kubenswrapper[3989]: I0313 12:36:37.180083 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:36:37.180215 master-0 kubenswrapper[3989]: I0313 12:36:37.180113 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.180215 master-0 kubenswrapper[3989]: I0313 12:36:37.180136 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.180396 master-0 kubenswrapper[3989]: I0313 12:36:37.180233 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:36:37.180396 master-0 kubenswrapper[3989]: I0313 12:36:37.180312 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.180396 master-0 kubenswrapper[3989]: I0313 12:36:37.180336 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.180396 master-0 kubenswrapper[3989]: I0313 12:36:37.180347 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:36:37.180396 master-0 kubenswrapper[3989]: I0313 12:36:37.180354 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.180396 master-0 kubenswrapper[3989]: I0313 12:36:37.180367 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:36:37.180396 master-0 kubenswrapper[3989]: I0313 12:36:37.180374 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.180396 master-0 kubenswrapper[3989]: I0313 12:36:37.180392 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:36:37.180396 master-0 kubenswrapper[3989]: I0313 12:36:37.180397 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:36:37.180396 master-0 kubenswrapper[3989]: I0313 12:36:37.180404 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:36:37.180693 master-0 kubenswrapper[3989]: I0313 12:36:37.180423 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.180693 master-0 kubenswrapper[3989]: I0313 12:36:37.180449 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.180693 master-0 kubenswrapper[3989]: I0313 12:36:37.180448 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:36:37.180693 master-0 kubenswrapper[3989]: I0313 12:36:37.180480 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.180693 master-0 kubenswrapper[3989]: I0313 12:36:37.180503 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.180693 master-0 kubenswrapper[3989]: I0313 12:36:37.180517 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.180693 master-0 kubenswrapper[3989]: I0313 12:36:37.180550 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.292531 master-0 kubenswrapper[3989]: I0313 12:36:37.292419 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.293525 master-0 kubenswrapper[3989]: I0313 12:36:37.293463 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.293525 master-0 kubenswrapper[3989]: I0313 12:36:37.293526 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.293682 master-0 kubenswrapper[3989]: I0313 12:36:37.293535 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.293682 master-0 kubenswrapper[3989]: I0313 12:36:37.293599 3989 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:36:37.294361 master-0 kubenswrapper[3989]: E0313 12:36:37.294322 3989 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 12:36:37.375132 master-0 kubenswrapper[3989]: I0313 12:36:37.374997 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:36:37.383715 master-0 kubenswrapper[3989]: I0313 12:36:37.383543 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:36:37.390336 master-0 kubenswrapper[3989]: W0313 12:36:37.390275 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:36:37.390421 master-0 kubenswrapper[3989]: E0313 12:36:37.390333 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:36:37.410758 master-0 kubenswrapper[3989]: I0313 12:36:37.410677 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:36:37.432885 master-0 kubenswrapper[3989]: I0313 12:36:37.432810 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:37.442026 master-0 kubenswrapper[3989]: I0313 12:36:37.441963 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:37.473230 master-0 kubenswrapper[3989]: I0313 12:36:37.473156 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:36:37.651481 master-0 kubenswrapper[3989]: W0313 12:36:37.651386 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:36:37.651481 master-0 kubenswrapper[3989]: E0313 12:36:37.651473 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:36:37.695563 master-0 kubenswrapper[3989]: I0313 12:36:37.695490 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:37.696599 master-0 kubenswrapper[3989]: I0313 12:36:37.696534 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:37.696676 master-0 kubenswrapper[3989]: I0313 12:36:37.696654 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:37.696719 master-0 kubenswrapper[3989]: I0313 12:36:37.696675 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:37.696757 master-0 kubenswrapper[3989]: I0313 12:36:37.696749 3989 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:36:37.698076 master-0 kubenswrapper[3989]: E0313 12:36:37.698012 3989 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0"
Mar 13 12:36:37.882070 master-0 kubenswrapper[3989]: E0313 12:36:37.881928 3989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 13 12:36:37.994240 master-0 kubenswrapper[3989]: W0313 12:36:37.994018 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:36:37.994240 master-0 kubenswrapper[3989]: E0313 12:36:37.994124 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:36:38.037991 master-0 kubenswrapper[3989]: W0313 12:36:38.037760 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d WatchSource:0}: Error finding container aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d: Status 404 returned error can't find the container with id aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d
Mar 13 12:36:38.040116 master-0 kubenswrapper[3989]: W0313 12:36:38.040026 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9add8df47182fc2eaf8cd78016ebe72.slice/crio-b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226 WatchSource:0}: Error finding container b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226: Status 404 returned error can't find the container with id b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226
Mar 13 12:36:38.043497 master-0 kubenswrapper[3989]: I0313 12:36:38.043469 3989 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Mar 13 12:36:38.069083 master-0 kubenswrapper[3989]: W0313 12:36:38.069008 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354f29997baa583b6238f7de9108ee10.slice/crio-d5dc6c6e80f445d51122ad5b527a93180bba8d53bfd02a0ec172defc7ab4ca77 WatchSource:0}: Error finding container d5dc6c6e80f445d51122ad5b527a93180bba8d53bfd02a0ec172defc7ab4ca77: Status 404 returned error can't find the container with id d5dc6c6e80f445d51122ad5b527a93180bba8d53bfd02a0ec172defc7ab4ca77
Mar 13 12:36:38.093310 master-0 kubenswrapper[3989]: W0313 12:36:38.093248 3989 manager.go:1169] Failed to process watch event {EventType:0
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f77c8e18b751d90bc0dfe2d4e304050.slice/crio-8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15 WatchSource:0}: Error finding container 8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15: Status 404 returned error can't find the container with id 8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15 Mar 13 12:36:38.113853 master-0 kubenswrapper[3989]: W0313 12:36:38.113789 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04 WatchSource:0}: Error finding container 9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04: Status 404 returned error can't find the container with id 9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04 Mar 13 12:36:38.426159 master-0 kubenswrapper[3989]: I0313 12:36:38.425991 3989 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 12:36:38.427445 master-0 kubenswrapper[3989]: E0313 12:36:38.427423 3989 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:38.431612 master-0 kubenswrapper[3989]: W0313 12:36:38.431500 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:38.431612 master-0 kubenswrapper[3989]: 
E0313 12:36:38.431587 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:38.472522 master-0 kubenswrapper[3989]: I0313 12:36:38.472408 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:38.499105 master-0 kubenswrapper[3989]: I0313 12:36:38.498998 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:38.505425 master-0 kubenswrapper[3989]: I0313 12:36:38.505335 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:38.505425 master-0 kubenswrapper[3989]: I0313 12:36:38.505384 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:38.505425 master-0 kubenswrapper[3989]: I0313 12:36:38.505410 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:38.505872 master-0 kubenswrapper[3989]: I0313 12:36:38.505475 3989 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:36:38.506528 master-0 kubenswrapper[3989]: E0313 12:36:38.506470 3989 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 12:36:38.944142 master-0 kubenswrapper[3989]: I0313 12:36:38.944023 3989 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04"} Mar 13 12:36:38.945445 master-0 kubenswrapper[3989]: I0313 12:36:38.945414 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15"} Mar 13 12:36:38.946448 master-0 kubenswrapper[3989]: I0313 12:36:38.946416 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"d5dc6c6e80f445d51122ad5b527a93180bba8d53bfd02a0ec172defc7ab4ca77"} Mar 13 12:36:38.947454 master-0 kubenswrapper[3989]: I0313 12:36:38.947400 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226"} Mar 13 12:36:38.948511 master-0 kubenswrapper[3989]: I0313 12:36:38.948478 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d"} Mar 13 12:36:39.472972 master-0 kubenswrapper[3989]: I0313 12:36:39.472911 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:39.484048 master-0 kubenswrapper[3989]: E0313 12:36:39.483986 3989 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 13 12:36:39.651861 master-0 kubenswrapper[3989]: E0313 12:36:39.651661 3989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c66cfeb312158 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.467261784 +0000 UTC m=+0.585729421,LastTimestamp:2026-03-13 12:36:36.467261784 +0000 UTC m=+0.585729421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:39.942373 master-0 kubenswrapper[3989]: W0313 12:36:39.942302 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:39.942373 master-0 kubenswrapper[3989]: E0313 12:36:39.942383 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:40.107332 master-0 
kubenswrapper[3989]: I0313 12:36:40.106828 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:40.109204 master-0 kubenswrapper[3989]: I0313 12:36:40.109147 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:40.109204 master-0 kubenswrapper[3989]: I0313 12:36:40.109186 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:40.109204 master-0 kubenswrapper[3989]: I0313 12:36:40.109197 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:40.109369 master-0 kubenswrapper[3989]: I0313 12:36:40.109259 3989 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:36:40.109985 master-0 kubenswrapper[3989]: E0313 12:36:40.109954 3989 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 12:36:40.730720 master-0 kubenswrapper[3989]: I0313 12:36:40.730549 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:40.736094 master-0 kubenswrapper[3989]: W0313 12:36:40.736036 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:40.736380 master-0 kubenswrapper[3989]: E0313 12:36:40.736150 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch 
*v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:40.772143 master-0 kubenswrapper[3989]: W0313 12:36:40.771998 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:40.772143 master-0 kubenswrapper[3989]: E0313 12:36:40.772054 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:40.955366 master-0 kubenswrapper[3989]: I0313 12:36:40.955283 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"68b6f8966a17045ff6a5d27e4da4e48714a155c30c56d6be16050ed7473f6700"} Mar 13 12:36:40.955721 master-0 kubenswrapper[3989]: I0313 12:36:40.955410 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:40.956246 master-0 kubenswrapper[3989]: I0313 12:36:40.956179 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:40.956246 master-0 kubenswrapper[3989]: I0313 12:36:40.956205 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:40.956246 master-0 kubenswrapper[3989]: I0313 12:36:40.956214 3989 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:41.174150 master-0 kubenswrapper[3989]: W0313 12:36:41.174086 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:41.174376 master-0 kubenswrapper[3989]: E0313 12:36:41.174164 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:41.472551 master-0 kubenswrapper[3989]: I0313 12:36:41.472500 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:42.011525 master-0 kubenswrapper[3989]: I0313 12:36:42.011425 3989 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="68b6f8966a17045ff6a5d27e4da4e48714a155c30c56d6be16050ed7473f6700" exitCode=0 Mar 13 12:36:42.011525 master-0 kubenswrapper[3989]: I0313 12:36:42.011487 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"68b6f8966a17045ff6a5d27e4da4e48714a155c30c56d6be16050ed7473f6700"} Mar 13 12:36:42.012255 master-0 kubenswrapper[3989]: I0313 12:36:42.011531 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 
13 12:36:42.014844 master-0 kubenswrapper[3989]: I0313 12:36:42.014162 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:42.014844 master-0 kubenswrapper[3989]: I0313 12:36:42.014200 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:42.014844 master-0 kubenswrapper[3989]: I0313 12:36:42.014216 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:42.473904 master-0 kubenswrapper[3989]: I0313 12:36:42.473766 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:42.677654 master-0 kubenswrapper[3989]: I0313 12:36:42.677547 3989 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 12:36:42.678965 master-0 kubenswrapper[3989]: E0313 12:36:42.678865 3989 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:42.684949 master-0 kubenswrapper[3989]: E0313 12:36:42.684908 3989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 13 12:36:43.016540 master-0 kubenswrapper[3989]: I0313 12:36:43.016380 3989 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf"} Mar 13 12:36:43.016540 master-0 kubenswrapper[3989]: I0313 12:36:43.016440 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:43.016540 master-0 kubenswrapper[3989]: I0313 12:36:43.016446 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d"} Mar 13 12:36:43.017892 master-0 kubenswrapper[3989]: I0313 12:36:43.017838 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:43.017946 master-0 kubenswrapper[3989]: I0313 12:36:43.017893 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:43.017946 master-0 kubenswrapper[3989]: I0313 12:36:43.017907 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:43.019583 master-0 kubenswrapper[3989]: I0313 12:36:43.019536 3989 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 13 12:36:43.019981 master-0 kubenswrapper[3989]: I0313 12:36:43.019955 3989 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="660f6c5ce48c550b172983408c27d255ddfe0b58a32258eaf8287660b2644303" exitCode=1 Mar 13 12:36:43.020060 master-0 kubenswrapper[3989]: I0313 12:36:43.019994 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"660f6c5ce48c550b172983408c27d255ddfe0b58a32258eaf8287660b2644303"} Mar 13 12:36:43.020102 master-0 kubenswrapper[3989]: I0313 12:36:43.020057 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:43.020819 master-0 kubenswrapper[3989]: I0313 12:36:43.020788 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:43.020819 master-0 kubenswrapper[3989]: I0313 12:36:43.020818 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:43.020911 master-0 kubenswrapper[3989]: I0313 12:36:43.020829 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:43.021143 master-0 kubenswrapper[3989]: I0313 12:36:43.021123 3989 scope.go:117] "RemoveContainer" containerID="660f6c5ce48c550b172983408c27d255ddfe0b58a32258eaf8287660b2644303" Mar 13 12:36:43.311133 master-0 kubenswrapper[3989]: I0313 12:36:43.311016 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:43.312141 master-0 kubenswrapper[3989]: I0313 12:36:43.312074 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:43.312227 master-0 kubenswrapper[3989]: I0313 12:36:43.312157 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:43.312227 master-0 kubenswrapper[3989]: I0313 12:36:43.312172 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:43.312317 master-0 kubenswrapper[3989]: I0313 12:36:43.312241 3989 
kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:36:43.313446 master-0 kubenswrapper[3989]: E0313 12:36:43.313411 3989 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 12:36:43.473185 master-0 kubenswrapper[3989]: I0313 12:36:43.473096 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:44.025912 master-0 kubenswrapper[3989]: I0313 12:36:44.025841 3989 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 13 12:36:44.027035 master-0 kubenswrapper[3989]: I0313 12:36:44.026454 3989 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 13 12:36:44.027035 master-0 kubenswrapper[3989]: I0313 12:36:44.026889 3989 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="d30d4f63f7cd58f4992a3085ac7040e2e62b00e72ffc9138e7116549180345bf" exitCode=1 Mar 13 12:36:44.027035 master-0 kubenswrapper[3989]: I0313 12:36:44.026977 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:44.027239 master-0 kubenswrapper[3989]: I0313 12:36:44.027047 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:44.027505 master-0 kubenswrapper[3989]: I0313 12:36:44.027415 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"d30d4f63f7cd58f4992a3085ac7040e2e62b00e72ffc9138e7116549180345bf"} Mar 13 12:36:44.027505 master-0 kubenswrapper[3989]: I0313 12:36:44.027484 3989 scope.go:117] "RemoveContainer" containerID="660f6c5ce48c550b172983408c27d255ddfe0b58a32258eaf8287660b2644303" Mar 13 12:36:44.027968 master-0 kubenswrapper[3989]: I0313 12:36:44.027951 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:44.028014 master-0 kubenswrapper[3989]: I0313 12:36:44.027973 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:44.028014 master-0 kubenswrapper[3989]: I0313 12:36:44.027981 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:44.028480 master-0 kubenswrapper[3989]: I0313 12:36:44.028458 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:44.028480 master-0 kubenswrapper[3989]: I0313 12:36:44.028476 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:44.028641 master-0 kubenswrapper[3989]: I0313 12:36:44.028485 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:44.028784 master-0 kubenswrapper[3989]: I0313 12:36:44.028754 3989 scope.go:117] "RemoveContainer" containerID="d30d4f63f7cd58f4992a3085ac7040e2e62b00e72ffc9138e7116549180345bf" Mar 13 12:36:44.028908 master-0 kubenswrapper[3989]: E0313 12:36:44.028870 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 13 12:36:44.367782 master-0 kubenswrapper[3989]: W0313 12:36:44.367576 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:44.367782 master-0 kubenswrapper[3989]: E0313 12:36:44.367681 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:36:44.473393 master-0 kubenswrapper[3989]: I0313 12:36:44.473323 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:36:45.029627 master-0 kubenswrapper[3989]: I0313 12:36:45.029107 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:45.031324 master-0 kubenswrapper[3989]: I0313 12:36:45.031298 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:45.031454 master-0 kubenswrapper[3989]: I0313 12:36:45.031337 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:45.031454 master-0 kubenswrapper[3989]: I0313 12:36:45.031349 3989 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:45.031737 master-0 kubenswrapper[3989]: I0313 12:36:45.031717 3989 scope.go:117] "RemoveContainer" containerID="d30d4f63f7cd58f4992a3085ac7040e2e62b00e72ffc9138e7116549180345bf"
Mar 13 12:36:45.031915 master-0 kubenswrapper[3989]: E0313 12:36:45.031888 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 13 12:36:45.172465 master-0 kubenswrapper[3989]: W0313 12:36:45.172379 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:36:45.172465 master-0 kubenswrapper[3989]: E0313 12:36:45.172462 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:36:45.470956 master-0 kubenswrapper[3989]: W0313 12:36:45.470885 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:36:45.470956 master-0 kubenswrapper[3989]: E0313 12:36:45.470954 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:36:45.472316 master-0 kubenswrapper[3989]: I0313 12:36:45.472279 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:36:46.473527 master-0 kubenswrapper[3989]: I0313 12:36:46.473357 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:36:46.502885 master-0 kubenswrapper[3989]: W0313 12:36:46.502781 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:36:46.502885 master-0 kubenswrapper[3989]: E0313 12:36:46.502871 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:36:46.991012 master-0 kubenswrapper[3989]: E0313 12:36:46.990889 3989 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 12:36:47.035255 master-0 kubenswrapper[3989]: I0313 12:36:47.035206 3989 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="ce22fd707eb8d075fa41f40a0f4c10a702d0584171d207a5ade9ca190ac33eb6" exitCode=0
Mar 13 12:36:47.035353 master-0 kubenswrapper[3989]: I0313 12:36:47.035283 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"ce22fd707eb8d075fa41f40a0f4c10a702d0584171d207a5ade9ca190ac33eb6"}
Mar 13 12:36:47.035353 master-0 kubenswrapper[3989]: I0313 12:36:47.035338 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:47.036204 master-0 kubenswrapper[3989]: I0313 12:36:47.036175 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:47.036267 master-0 kubenswrapper[3989]: I0313 12:36:47.036210 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:47.036267 master-0 kubenswrapper[3989]: I0313 12:36:47.036223 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:47.037232 master-0 kubenswrapper[3989]: I0313 12:36:47.037209 3989 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 13 12:36:47.038909 master-0 kubenswrapper[3989]: I0313 12:36:47.038888 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:47.040618 master-0 kubenswrapper[3989]: I0313 12:36:47.040554 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:47.040699 master-0 kubenswrapper[3989]: I0313 12:36:47.040638 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:47.040699 master-0 kubenswrapper[3989]: I0313 12:36:47.040653 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:47.041285 master-0 kubenswrapper[3989]: I0313 12:36:47.041257 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"5b237a8f0fb7f64dfadac55f3b8fce83d665c3145bdb4f7b5e426e2db8133d9a"}
Mar 13 12:36:47.041437 master-0 kubenswrapper[3989]: I0313 12:36:47.041404 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:47.042241 master-0 kubenswrapper[3989]: I0313 12:36:47.042218 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:47.042311 master-0 kubenswrapper[3989]: I0313 12:36:47.042261 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:47.042311 master-0 kubenswrapper[3989]: I0313 12:36:47.042279 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:47.473301 master-0 kubenswrapper[3989]: I0313 12:36:47.473223 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:36:48.044724 master-0 kubenswrapper[3989]: I0313 12:36:48.044573 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"9db6288a98029b0a09c12d8d262b41839cd5c5aa57fa3824b78834e64ca0ee2e"}
Mar 13 12:36:48.046464 master-0 kubenswrapper[3989]: I0313 12:36:48.046420 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"9cc438a36a13c0e2e1f239bcab312b0eda7119d2153cef22f48639612d94c13e"}
Mar 13 12:36:48.046525 master-0 kubenswrapper[3989]: I0313 12:36:48.046465 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:48.047238 master-0 kubenswrapper[3989]: I0313 12:36:48.047210 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:48.047311 master-0 kubenswrapper[3989]: I0313 12:36:48.047242 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:48.047311 master-0 kubenswrapper[3989]: I0313 12:36:48.047255 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:49.118935 master-0 kubenswrapper[3989]: E0313 12:36:49.118871 3989 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 13 12:36:49.119721 master-0 kubenswrapper[3989]: I0313 12:36:49.119680 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:49.473794 master-0 kubenswrapper[3989]: I0313 12:36:49.473747 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:49.658621 master-0 kubenswrapper[3989]: E0313 12:36:49.658001 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cfeb312158 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.467261784 +0000 UTC m=+0.585729421,LastTimestamp:2026-03-13 12:36:36.467261784 +0000 UTC m=+0.585729421,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.665753 master-0 kubenswrapper[3989]: E0313 12:36:49.663014 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08dcecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557221579 +0000 UTC m=+0.675689216,LastTimestamp:2026-03-13 12:36:36.557221579 +0000 UTC m=+0.675689216,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.667795 master-0 kubenswrapper[3989]: E0313 12:36:49.667642 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08e2bd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.5572454 +0000 UTC m=+0.675713037,LastTimestamp:2026-03-13 12:36:36.5572454 +0000 UTC m=+0.675713037,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.672225 master-0 kubenswrapper[3989]: E0313 12:36:49.672025 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08ea09b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557275291 +0000 UTC m=+0.675742928,LastTimestamp:2026-03-13 12:36:36.557275291 +0000 UTC m=+0.675742928,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.676315 master-0 kubenswrapper[3989]: E0313 12:36:49.676186 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66d00af91092 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:37.000458386 +0000 UTC m=+1.118926033,LastTimestamp:2026-03-13 12:36:37.000458386 +0000 UTC m=+1.118926033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.683534 master-0 kubenswrapper[3989]: E0313 12:36:49.683375 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08dcecb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08dcecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557221579 +0000 UTC m=+0.675689216,LastTimestamp:2026-03-13 12:36:37.03948876 +0000 UTC m=+1.157956407,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.690854 master-0 kubenswrapper[3989]: E0313 12:36:49.690670 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08e2bd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08e2bd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.5572454 +0000 UTC m=+0.675713037,LastTimestamp:2026-03-13 12:36:37.039532321 +0000 UTC m=+1.157999958,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.697925 master-0 kubenswrapper[3989]: E0313 12:36:49.697454 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08ea09b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08ea09b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557275291 +0000 UTC m=+0.675742928,LastTimestamp:2026-03-13 12:36:37.039547942 +0000 UTC m=+1.158015579,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.702547 master-0 kubenswrapper[3989]: E0313 12:36:49.702476 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08dcecb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08dcecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557221579 +0000 UTC m=+0.675689216,LastTimestamp:2026-03-13 12:36:37.040873639 +0000 UTC m=+1.159341286,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.714665 master-0 kubenswrapper[3989]: I0313 12:36:49.714587 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:49.735192 master-0 kubenswrapper[3989]: I0313 12:36:49.733252 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:49.735192 master-0 kubenswrapper[3989]: I0313 12:36:49.733302 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:49.735192 master-0 kubenswrapper[3989]: I0313 12:36:49.733314 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:49.735192 master-0 kubenswrapper[3989]: I0313 12:36:49.733382 3989 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:36:49.735531 master-0 kubenswrapper[3989]: E0313 12:36:49.735294 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08e2bd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08e2bd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.5572454 +0000 UTC m=+0.675713037,LastTimestamp:2026-03-13 12:36:37.040900339 +0000 UTC m=+1.159367996,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.745629 master-0 kubenswrapper[3989]: E0313 12:36:49.745549 3989 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 13 12:36:49.745822 master-0 kubenswrapper[3989]: E0313 12:36:49.745623 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08ea09b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08ea09b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557275291 +0000 UTC m=+0.675742928,LastTimestamp:2026-03-13 12:36:37.04091166 +0000 UTC m=+1.159379307,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.752382 master-0 kubenswrapper[3989]: E0313 12:36:49.752226 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08dcecb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08dcecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557221579 +0000 UTC m=+0.675689216,LastTimestamp:2026-03-13 12:36:37.041343561 +0000 UTC m=+1.159811198,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.757881 master-0 kubenswrapper[3989]: E0313 12:36:49.757684 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08e2bd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08e2bd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.5572454 +0000 UTC m=+0.675713037,LastTimestamp:2026-03-13 12:36:37.041371742 +0000 UTC m=+1.159839379,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.763185 master-0 kubenswrapper[3989]: E0313 12:36:49.763009 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08ea09b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08ea09b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557275291 +0000 UTC m=+0.675742928,LastTimestamp:2026-03-13 12:36:37.041386242 +0000 UTC m=+1.159853879,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.770158 master-0 kubenswrapper[3989]: E0313 12:36:49.770050 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08dcecb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08dcecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557221579 +0000 UTC m=+0.675689216,LastTimestamp:2026-03-13 12:36:37.042304848 +0000 UTC m=+1.160772485,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.778087 master-0 kubenswrapper[3989]: E0313 12:36:49.777973 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08e2bd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08e2bd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.5572454 +0000 UTC m=+0.675713037,LastTimestamp:2026-03-13 12:36:37.042330528 +0000 UTC m=+1.160798165,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.782810 master-0 kubenswrapper[3989]: E0313 12:36:49.782709 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08ea09b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08ea09b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557275291 +0000 UTC m=+0.675742928,LastTimestamp:2026-03-13 12:36:37.042339689 +0000 UTC m=+1.160807326,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.787879 master-0 kubenswrapper[3989]: E0313 12:36:49.787775 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08dcecb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08dcecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557221579 +0000 UTC m=+0.675689216,LastTimestamp:2026-03-13 12:36:37.042815132 +0000 UTC m=+1.161282769,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.792982 master-0 kubenswrapper[3989]: E0313 12:36:49.792899 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08e2bd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08e2bd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.5572454 +0000 UTC m=+0.675713037,LastTimestamp:2026-03-13 12:36:37.042839602 +0000 UTC m=+1.161307239,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.796942 master-0 kubenswrapper[3989]: E0313 12:36:49.796877 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08ea09b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08ea09b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557275291 +0000 UTC m=+0.675742928,LastTimestamp:2026-03-13 12:36:37.042849723 +0000 UTC m=+1.161317360,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.801177 master-0 kubenswrapper[3989]: E0313 12:36:49.801061 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08dcecb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08dcecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557221579 +0000 UTC m=+0.675689216,LastTimestamp:2026-03-13 12:36:37.04309706 +0000 UTC m=+1.161564697,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.805143 master-0 kubenswrapper[3989]: E0313 12:36:49.805031 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08e2bd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08e2bd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.5572454 +0000 UTC m=+0.675713037,LastTimestamp:2026-03-13 12:36:37.04311208 +0000 UTC m=+1.161579717,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.810719 master-0 kubenswrapper[3989]: E0313 12:36:49.810639 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08ea09b\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08ea09b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557275291 +0000 UTC m=+0.675742928,LastTimestamp:2026-03-13 12:36:37.04311955 +0000 UTC m=+1.161587187,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.815550 master-0 kubenswrapper[3989]: E0313 12:36:49.815395 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08dcecb\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08dcecb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.557221579 +0000 UTC m=+0.675689216,LastTimestamp:2026-03-13 12:36:37.043339536 +0000 UTC m=+1.161807213,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.819840 master-0 kubenswrapper[3989]: E0313 12:36:49.819717 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c66cff08e2bd8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c66cff08e2bd8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:36.5572454 +0000 UTC m=+0.675713037,LastTimestamp:2026-03-13 12:36:37.043388528 +0000 UTC m=+1.161856205,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.826324 master-0 kubenswrapper[3989]: E0313 12:36:49.826212 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d049231f06 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:38.04340199 +0000 UTC m=+2.161869627,LastTimestamp:2026-03-13 12:36:38.04340199 +0000 UTC m=+2.161869627,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.831248 master-0 kubenswrapper[3989]: E0313 12:36:49.831124 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66d04923e7ac kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:38.043453356 +0000 UTC m=+2.161920993,LastTimestamp:2026-03-13 12:36:38.043453356 +0000 UTC m=+2.161920993,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.836880 master-0 kubenswrapper[3989]: E0313 12:36:49.836755 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66d04b16606c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:38.076121196 +0000 UTC m=+2.194588833,LastTimestamp:2026-03-13 12:36:38.076121196 +0000 UTC m=+2.194588833,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.841571 master-0 kubenswrapper[3989]: E0313 12:36:49.841070 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d04c892a23 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:38.100421155 +0000 UTC m=+2.218888782,LastTimestamp:2026-03-13 12:36:38.100421155 +0000 UTC m=+2.218888782,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.845814 master-0 kubenswrapper[3989]: E0313 12:36:49.845362 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c66d04d8b7ac7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:38.117350087 +0000 UTC m=+2.235817724,LastTimestamp:2026-03-13 12:36:38.117350087 +0000 UTC m=+2.235817724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:49.850209 master-0 kubenswrapper[3989]: E0313 12:36:49.850122 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d0b294ce44 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" in 1.769s (1.769s including waiting). Image size: 465086330 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:39.8124601 +0000 UTC m=+3.930927737,LastTimestamp:2026-03-13 12:36:39.8124601 +0000 UTC m=+3.930927737,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.854680 master-0 kubenswrapper[3989]: E0313 12:36:49.854593 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d0bef9101a openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:40.020357146 +0000 UTC m=+4.138824783,LastTimestamp:2026-03-13 12:36:40.020357146 +0000 UTC m=+4.138824783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.858905 master-0 kubenswrapper[3989]: E0313 12:36:49.858826 3989 event.go:359] "Server rejected event (will 
not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d0bff99b2e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:40.037169966 +0000 UTC m=+4.155637603,LastTimestamp:2026-03-13 12:36:40.037169966 +0000 UTC m=+4.155637603,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.863346 master-0 kubenswrapper[3989]: E0313 12:36:49.863265 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66d113dd7028 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" in 3.368s (3.368s including waiting). 
Image size: 529324693 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:41.444610088 +0000 UTC m=+5.563077725,LastTimestamp:2026-03-13 12:36:41.444610088 +0000 UTC m=+5.563077725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.869345 master-0 kubenswrapper[3989]: E0313 12:36:49.869171 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1361d9a68 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.019240552 +0000 UTC m=+6.137708189,LastTimestamp:2026-03-13 12:36:42.019240552 +0000 UTC m=+6.137708189,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.878567 master-0 kubenswrapper[3989]: E0313 12:36:49.878438 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66d13b4fe261 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.106421857 +0000 UTC m=+6.224889494,LastTimestamp:2026-03-13 12:36:42.106421857 +0000 UTC m=+6.224889494,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.883861 master-0 kubenswrapper[3989]: E0313 12:36:49.883778 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66d13c19c404 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.119652356 +0000 UTC m=+6.238119993,LastTimestamp:2026-03-13 12:36:42.119652356 +0000 UTC m=+6.238119993,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.888770 master-0 kubenswrapper[3989]: E0313 12:36:49.888561 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66d13c704c26 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] 
[] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.125323302 +0000 UTC m=+6.243790939,LastTimestamp:2026-03-13 12:36:42.125323302 +0000 UTC m=+6.243790939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.916296 master-0 kubenswrapper[3989]: E0313 12:36:49.916139 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1431ff9ac openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.23749982 +0000 UTC m=+6.355967457,LastTimestamp:2026-03-13 12:36:42.23749982 +0000 UTC m=+6.355967457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.926303 master-0 kubenswrapper[3989]: E0313 12:36:49.926198 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1441a91a8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.253922728 +0000 UTC m=+6.372390365,LastTimestamp:2026-03-13 12:36:42.253922728 +0000 UTC m=+6.372390365,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.935941 master-0 kubenswrapper[3989]: E0313 12:36:49.935426 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66d14c5211a3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.391777699 +0000 UTC m=+6.510245336,LastTimestamp:2026-03-13 12:36:42.391777699 +0000 UTC m=+6.510245336,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.941881 master-0 kubenswrapper[3989]: E0313 12:36:49.941748 
3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c66d14d382dac openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.406858156 +0000 UTC m=+6.525325793,LastTimestamp:2026-03-13 12:36:42.406858156 +0000 UTC m=+6.525325793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.947293 master-0 kubenswrapper[3989]: E0313 12:36:49.947136 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66d1361d9a68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1361d9a68 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.019240552 +0000 UTC m=+6.137708189,LastTimestamp:2026-03-13 
12:36:43.02349864 +0000 UTC m=+7.141966277,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.952846 master-0 kubenswrapper[3989]: E0313 12:36:49.952752 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66d1431ff9ac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1431ff9ac openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.23749982 +0000 UTC m=+6.355967457,LastTimestamp:2026-03-13 12:36:43.375809461 +0000 UTC m=+7.494277098,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.957365 master-0 kubenswrapper[3989]: E0313 12:36:49.957252 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66d1441a91a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1441a91a8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.253922728 +0000 UTC m=+6.372390365,LastTimestamp:2026-03-13 12:36:43.392308051 +0000 UTC m=+7.510775688,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.963040 master-0 kubenswrapper[3989]: E0313 12:36:49.962902 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1ade5b7d6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:44.02884399 +0000 UTC m=+8.147311617,LastTimestamp:2026-03-13 12:36:44.02884399 +0000 UTC m=+8.147311617,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.967942 master-0 kubenswrapper[3989]: E0313 12:36:49.967850 3989 event.go:359] "Server rejected event (will not retry!)" 
err="events \"kube-rbac-proxy-crio-master-0.189c66d1ade5b7d6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1ade5b7d6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:44.02884399 +0000 UTC m=+8.147311617,LastTimestamp:2026-03-13 12:36:45.031857788 +0000 UTC m=+9.150325425,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.972062 master-0 kubenswrapper[3989]: E0313 12:36:49.971926 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c66d232c878b5 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 8.14s (8.14s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:46.258297013 +0000 UTC m=+10.376764650,LastTimestamp:2026-03-13 12:36:46.258297013 +0000 UTC m=+10.376764650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.976957 master-0 kubenswrapper[3989]: E0313 12:36:49.976884 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d233c20bd3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 8.174s (8.174s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:46.274653139 +0000 UTC m=+10.393120776,LastTimestamp:2026-03-13 12:36:46.274653139 +0000 UTC m=+10.393120776,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.981844 master-0 kubenswrapper[3989]: E0313 12:36:49.981768 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d246ee7fb6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:46.596333494 +0000 UTC m=+10.714801151,LastTimestamp:2026-03-13 12:36:46.596333494 +0000 UTC m=+10.714801151,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.986380 master-0 kubenswrapper[3989]: E0313 12:36:49.986219 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c66d2470554f8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:46.59782988 +0000 UTC m=+10.716297517,LastTimestamp:2026-03-13 12:36:46.59782988 +0000 UTC m=+10.716297517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.989993 master-0 kubenswrapper[3989]: E0313 12:36:49.989940 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c66d2514c69a7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:46.770260391 +0000 UTC m=+10.888728028,LastTimestamp:2026-03-13 12:36:46.770260391 +0000 UTC m=+10.888728028,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.994202 master-0 kubenswrapper[3989]: E0313 12:36:49.994070 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" 
event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d254a220c8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:46.82620948 +0000 UTC m=+10.944677117,LastTimestamp:2026-03-13 12:36:46.82620948 +0000 UTC m=+10.944677117,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:36:49.999238 master-0 kubenswrapper[3989]: E0313 12:36:49.999173 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66d25ffc5a41 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 8.973s (8.973s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:47.016671809 +0000 UTC m=+11.135139446,LastTimestamp:2026-03-13 12:36:47.016671809 +0000 UTC m=+11.135139446,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:50.003251 master-0 kubenswrapper[3989]: E0313 12:36:50.003183 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d2614e82b6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:47.038833334 +0000 UTC m=+11.157300971,LastTimestamp:2026-03-13 12:36:47.038833334 +0000 UTC m=+11.157300971,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:50.007691 master-0 kubenswrapper[3989]: E0313 12:36:50.007549 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66d27077a8c9 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:47.293188297 +0000 UTC m=+11.411655934,LastTimestamp:2026-03-13 12:36:47.293188297 +0000 UTC m=+11.411655934,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:50.011708 master-0 kubenswrapper[3989]: E0313 12:36:50.011602 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d2841d535c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:47.622812508 +0000 UTC m=+11.741280145,LastTimestamp:2026-03-13 12:36:47.622812508 +0000 UTC m=+11.741280145,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:50.015417 master-0 kubenswrapper[3989]: E0313 12:36:50.015134 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66d2848a7807 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:47.629965319 +0000 UTC m=+11.748432956,LastTimestamp:2026-03-13 12:36:47.629965319 +0000 UTC m=+11.748432956,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:50.018455 master-0 kubenswrapper[3989]: E0313 12:36:50.018387 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66d28499321b kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:47.630930459 +0000 UTC m=+11.749398096,LastTimestamp:2026-03-13 12:36:47.630930459 +0000 UTC m=+11.749398096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:50.021644 master-0 kubenswrapper[3989]: E0313 12:36:50.021549 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d29466cae7 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:47.896062695 +0000 UTC m=+12.014530332,LastTimestamp:2026-03-13 12:36:47.896062695 +0000 UTC m=+12.014530332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:50.025547 master-0 kubenswrapper[3989]: E0313 12:36:50.025465 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d29477ca04 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:47.89717658 +0000 UTC m=+12.015644207,LastTimestamp:2026-03-13 12:36:47.89717658 +0000 UTC m=+12.015644207,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:50.476851 master-0 kubenswrapper[3989]: I0313 12:36:50.476788 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:50.927825 master-0 kubenswrapper[3989]: I0313 12:36:50.927746 3989 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 13 12:36:51.062623 master-0 kubenswrapper[3989]: I0313 12:36:51.062543 3989 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 13 12:36:51.478337 master-0 kubenswrapper[3989]: I0313 12:36:51.478250 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:51.758037 master-0 kubenswrapper[3989]: E0313 12:36:51.757901 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66d37a4978e1 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\" in 4.121s (4.121s including waiting). Image size: 505242594 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:51.752900833 +0000 UTC m=+15.871368470,LastTimestamp:2026-03-13 12:36:51.752900833 +0000 UTC m=+15.871368470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:51.768446 master-0 kubenswrapper[3989]: E0313 12:36:51.768312 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d37ae93206 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" in 3.866s (3.866s including waiting). Image size: 514980169 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:51.763368454 +0000 UTC m=+15.881836091,LastTimestamp:2026-03-13 12:36:51.763368454 +0000 UTC m=+15.881836091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:51.943326 master-0 kubenswrapper[3989]: E0313 12:36:51.943156 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66d385590338 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:51.938468664 +0000 UTC m=+16.056936301,LastTimestamp:2026-03-13 12:36:51.938468664 +0000 UTC m=+16.056936301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:51.951122 master-0 kubenswrapper[3989]: E0313 12:36:51.950962 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d385bf0c8b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:51.945155723 +0000 UTC m=+16.063623360,LastTimestamp:2026-03-13 12:36:51.945155723 +0000 UTC m=+16.063623360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:51.987686 master-0 kubenswrapper[3989]: E0313 12:36:51.987472 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c66d387f8ecf7 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:51.982503159 +0000 UTC m=+16.100970796,LastTimestamp:2026-03-13 12:36:51.982503159 +0000 UTC m=+16.100970796,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:51.992006 master-0 kubenswrapper[3989]: E0313 12:36:51.991906 3989 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c66d3881ba1d8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:51.984777688 +0000 UTC m=+16.103245335,LastTimestamp:2026-03-13 12:36:51.984777688 +0000 UTC m=+16.103245335,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:52.058004 master-0 kubenswrapper[3989]: I0313 12:36:52.057843 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"5ae7ae35f7136762cbb13e8c36aee38aecdcf9e047584314d44cc6cd1301533e"}
Mar 13 12:36:52.058004 master-0 kubenswrapper[3989]: I0313 12:36:52.057880 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:52.058778 master-0 kubenswrapper[3989]: I0313 12:36:52.058752 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:52.058842 master-0 kubenswrapper[3989]: I0313 12:36:52.058790 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:52.058842 master-0 kubenswrapper[3989]: I0313 12:36:52.058801 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:52.060372 master-0 kubenswrapper[3989]: I0313 12:36:52.060332 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"63e03be6775769ad765af20dfd2ac68f1e500a160a4e77eda15bd7fdcfe1bc2a"}
Mar 13 12:36:52.060439 master-0 kubenswrapper[3989]: I0313 12:36:52.060408 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:52.061345 master-0 kubenswrapper[3989]: I0313 12:36:52.061299 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:52.061406 master-0 kubenswrapper[3989]: I0313 12:36:52.061352 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:52.061406 master-0 kubenswrapper[3989]: I0313 12:36:52.061365 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:52.456909 master-0 kubenswrapper[3989]: W0313 12:36:52.456831 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Mar 13 12:36:52.456909 master-0 kubenswrapper[3989]: E0313 12:36:52.456907 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 13 12:36:52.477918 master-0 kubenswrapper[3989]: I0313 12:36:52.477827 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:53.062784 master-0 kubenswrapper[3989]: I0313 12:36:53.062703 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:53.063413 master-0 kubenswrapper[3989]: I0313 12:36:53.062720 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:53.063413 master-0 kubenswrapper[3989]: I0313 12:36:53.063373 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:53.063413 master-0 kubenswrapper[3989]: I0313 12:36:53.063399 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:53.063413 master-0 kubenswrapper[3989]: I0313 12:36:53.063408 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:53.063743 master-0 kubenswrapper[3989]: I0313 12:36:53.063711 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:53.063840 master-0 kubenswrapper[3989]: I0313 12:36:53.063750 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:53.063840 master-0 kubenswrapper[3989]: I0313 12:36:53.063766 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:53.410492 master-0 kubenswrapper[3989]: I0313 12:36:53.410130 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:53.432252 master-0 kubenswrapper[3989]: W0313 12:36:53.432193 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Mar 13 12:36:53.432252 master-0 kubenswrapper[3989]: E0313 12:36:53.432255 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Mar 13 12:36:53.477364 master-0 kubenswrapper[3989]: I0313 12:36:53.477301 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:54.064712 master-0 kubenswrapper[3989]: I0313 12:36:54.064619 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:54.065732 master-0 kubenswrapper[3989]: I0313 12:36:54.065539 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:54.065732 master-0 kubenswrapper[3989]: I0313 12:36:54.065610 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:54.065732 master-0 kubenswrapper[3989]: I0313 12:36:54.065625 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:54.247052 master-0 kubenswrapper[3989]: W0313 12:36:54.246975 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:54.247052 master-0 kubenswrapper[3989]: E0313 12:36:54.247041 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 13 12:36:54.477547 master-0 kubenswrapper[3989]: I0313 12:36:54.477465 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:55.178230 master-0 kubenswrapper[3989]: I0313 12:36:55.178151 3989 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:55.178842 master-0 kubenswrapper[3989]: I0313 12:36:55.178325 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:55.179350 master-0 kubenswrapper[3989]: I0313 12:36:55.179315 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:55.179350 master-0 kubenswrapper[3989]: I0313 12:36:55.179349 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:55.179461 master-0 kubenswrapper[3989]: I0313 12:36:55.179358 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:55.182725 master-0 kubenswrapper[3989]: I0313 12:36:55.182684 3989 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:55.476604 master-0 kubenswrapper[3989]: I0313 12:36:55.476529 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:55.511843 master-0 kubenswrapper[3989]: W0313 12:36:55.511768 3989 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Mar 13 12:36:55.511843 master-0 kubenswrapper[3989]: E0313 12:36:55.511816 3989 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError"
Mar 13 12:36:55.542567 master-0 kubenswrapper[3989]: I0313 12:36:55.542417 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:55.546480 master-0 kubenswrapper[3989]: I0313 12:36:55.546442 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:36:55.937659 master-0 kubenswrapper[3989]: I0313 12:36:55.937554 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:55.938702 master-0 kubenswrapper[3989]: I0313 12:36:55.938683 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:55.938803 master-0 kubenswrapper[3989]: I0313 12:36:55.938714 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:55.938803 master-0 kubenswrapper[3989]: I0313 12:36:55.938722 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:55.939090 master-0 kubenswrapper[3989]: I0313 12:36:55.939075 3989 scope.go:117] "RemoveContainer" containerID="d30d4f63f7cd58f4992a3085ac7040e2e62b00e72ffc9138e7116549180345bf"
Mar 13 12:36:55.950361 master-0 kubenswrapper[3989]: E0313 12:36:55.950214 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66d1361d9a68\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1361d9a68 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.019240552 +0000 UTC m=+6.137708189,LastTimestamp:2026-03-13 12:36:55.942946215 +0000 UTC m=+20.061413852,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:56.068823 master-0 kubenswrapper[3989]: I0313 12:36:56.068710 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:56.069522 master-0 kubenswrapper[3989]: I0313 12:36:56.069448 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:56.069522 master-0 kubenswrapper[3989]: I0313 12:36:56.069516 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:56.069522 master-0 kubenswrapper[3989]: I0313 12:36:56.069532 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:56.124720 master-0 kubenswrapper[3989]: E0313 12:36:56.124651 3989 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 13 12:36:56.132887 master-0 kubenswrapper[3989]: E0313 12:36:56.132762 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66d1431ff9ac\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1431ff9ac openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.23749982 +0000 UTC m=+6.355967457,LastTimestamp:2026-03-13 12:36:56.128048312 +0000 UTC m=+20.246515949,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:56.144443 master-0 kubenswrapper[3989]: E0313 12:36:56.144329 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66d1441a91a8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1441a91a8 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:42.253922728 +0000 UTC m=+6.372390365,LastTimestamp:2026-03-13 12:36:56.139213943 +0000 UTC m=+20.257681580,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:56.479591 master-0 kubenswrapper[3989]: I0313 12:36:56.479508 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:56.525898 master-0 kubenswrapper[3989]: I0313 12:36:56.525842 3989 csr.go:261] certificate signing request csr-w27rh is approved, waiting to be issued
Mar 13 12:36:56.746462 master-0 kubenswrapper[3989]: I0313 12:36:56.746195 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:56.747660 master-0 kubenswrapper[3989]: I0313 12:36:56.747624 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:56.747735 master-0 kubenswrapper[3989]: I0313 12:36:56.747675 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:56.747735 master-0 kubenswrapper[3989]: I0313 12:36:56.747686 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:56.747802 master-0 kubenswrapper[3989]: I0313 12:36:56.747782 3989 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:36:56.753625 master-0 kubenswrapper[3989]: E0313 12:36:56.753515 3989 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 13 12:36:56.991246 master-0 kubenswrapper[3989]: E0313 12:36:56.991092 3989 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 12:36:57.072435 master-0 kubenswrapper[3989]: I0313 12:36:57.072275 3989 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 13 12:36:57.072939 master-0 kubenswrapper[3989]: I0313 12:36:57.072895 3989 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 13 12:36:57.073305 master-0 kubenswrapper[3989]: I0313 12:36:57.073261 3989 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="fa04bd1d8b838a856ef3334cc68d9da0449dbf549bcd199af5292664d8bc9f66" exitCode=1
Mar 13 12:36:57.073367 master-0 kubenswrapper[3989]: I0313 12:36:57.073337 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"fa04bd1d8b838a856ef3334cc68d9da0449dbf549bcd199af5292664d8bc9f66"}
Mar 13 12:36:57.073401 master-0 kubenswrapper[3989]: I0313 12:36:57.073383 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:57.073401 master-0 kubenswrapper[3989]: I0313 12:36:57.073391 3989 scope.go:117] "RemoveContainer" containerID="d30d4f63f7cd58f4992a3085ac7040e2e62b00e72ffc9138e7116549180345bf"
Mar 13 12:36:57.073541 master-0 kubenswrapper[3989]: I0313 12:36:57.073460 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:57.074199 master-0 kubenswrapper[3989]: I0313 12:36:57.074165 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:57.074199 master-0 kubenswrapper[3989]: I0313 12:36:57.074199 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:57.074286 master-0 kubenswrapper[3989]: I0313 12:36:57.074209 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:57.074286 master-0 kubenswrapper[3989]: I0313 12:36:57.074253 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:57.074344 master-0 kubenswrapper[3989]: I0313 12:36:57.074294 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:57.074344 master-0 kubenswrapper[3989]: I0313 12:36:57.074303 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:57.074554 master-0 kubenswrapper[3989]: I0313 12:36:57.074514 3989 scope.go:117] "RemoveContainer" containerID="fa04bd1d8b838a856ef3334cc68d9da0449dbf549bcd199af5292664d8bc9f66"
Mar 13 12:36:57.075333 master-0 kubenswrapper[3989]: E0313 12:36:57.074746 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 13 12:36:57.080482 master-0 kubenswrapper[3989]: E0313 12:36:57.080278 3989 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c66d1ade5b7d6\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c66d1ade5b7d6 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:36:44.02884399 +0000 UTC m=+8.147311617,LastTimestamp:2026-03-13 12:36:57.074677992 +0000 UTC m=+21.193145619,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:36:57.478323 master-0 kubenswrapper[3989]: I0313 12:36:57.478251 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:57.795451 master-0 kubenswrapper[3989]: I0313 12:36:57.795258 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:57.795451 master-0 kubenswrapper[3989]: I0313 12:36:57.795473 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:36:57.796914 master-0 kubenswrapper[3989]: I0313 12:36:57.796890 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:36:57.797028 master-0 kubenswrapper[3989]: I0313 12:36:57.797016 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:36:57.797097 master-0 kubenswrapper[3989]: I0313 12:36:57.797086 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:36:58.077352 master-0 kubenswrapper[3989]: I0313 12:36:58.077179 3989 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 13 12:36:58.476606 master-0 kubenswrapper[3989]: I0313 12:36:58.476545 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:59.478248 master-0 kubenswrapper[3989]: I0313 12:36:59.478151 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 12:36:59.718748 master-0 kubenswrapper[3989]: I0313 12:36:59.718644 3989 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:36:59.718988 master-0 kubenswrapper[3989]: I0313 12:36:59.718883
3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:36:59.719985 master-0 kubenswrapper[3989]: I0313 12:36:59.719957 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:36:59.720058 master-0 kubenswrapper[3989]: I0313 12:36:59.720000 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:36:59.720058 master-0 kubenswrapper[3989]: I0313 12:36:59.720009 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:36:59.723373 master-0 kubenswrapper[3989]: I0313 12:36:59.723345 3989 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:00.082567 master-0 kubenswrapper[3989]: I0313 12:37:00.082528 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:37:00.083290 master-0 kubenswrapper[3989]: I0313 12:37:00.083257 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:37:00.083290 master-0 kubenswrapper[3989]: I0313 12:37:00.083292 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:37:00.083375 master-0 kubenswrapper[3989]: I0313 12:37:00.083301 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:37:00.086609 master-0 kubenswrapper[3989]: I0313 12:37:00.086557 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:00.177105 master-0 kubenswrapper[3989]: I0313 12:37:00.176996 3989 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:00.181624 master-0 kubenswrapper[3989]: I0313 12:37:00.181555 3989 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:00.477510 master-0 kubenswrapper[3989]: I0313 12:37:00.477419 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:37:01.084654 master-0 kubenswrapper[3989]: I0313 12:37:01.084540 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:37:01.085656 master-0 kubenswrapper[3989]: I0313 12:37:01.085617 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:37:01.085737 master-0 kubenswrapper[3989]: I0313 12:37:01.085698 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:37:01.085737 master-0 kubenswrapper[3989]: I0313 12:37:01.085717 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:37:01.088860 master-0 kubenswrapper[3989]: I0313 12:37:01.088809 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:37:01.478112 master-0 kubenswrapper[3989]: I0313 12:37:01.478030 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:37:02.086919 master-0 kubenswrapper[3989]: I0313 12:37:02.086834 3989 kubelet_node_status.go:401] "Setting node annotation 
to enable volume controller attach/detach" Mar 13 12:37:02.087866 master-0 kubenswrapper[3989]: I0313 12:37:02.087815 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:37:02.087866 master-0 kubenswrapper[3989]: I0313 12:37:02.087869 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:37:02.087961 master-0 kubenswrapper[3989]: I0313 12:37:02.087879 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:37:02.480327 master-0 kubenswrapper[3989]: I0313 12:37:02.480244 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:37:03.089620 master-0 kubenswrapper[3989]: I0313 12:37:03.089541 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:37:03.090813 master-0 kubenswrapper[3989]: I0313 12:37:03.090776 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:37:03.090887 master-0 kubenswrapper[3989]: I0313 12:37:03.090818 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:37:03.090887 master-0 kubenswrapper[3989]: I0313 12:37:03.090832 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:37:03.130447 master-0 kubenswrapper[3989]: E0313 12:37:03.130355 3989 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace 
\"kube-node-lease\"" interval="7s" Mar 13 12:37:03.476746 master-0 kubenswrapper[3989]: I0313 12:37:03.476687 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:37:03.754717 master-0 kubenswrapper[3989]: I0313 12:37:03.754543 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:37:03.755552 master-0 kubenswrapper[3989]: I0313 12:37:03.755529 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:37:03.756178 master-0 kubenswrapper[3989]: I0313 12:37:03.755702 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:37:03.756178 master-0 kubenswrapper[3989]: I0313 12:37:03.755722 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:37:03.756178 master-0 kubenswrapper[3989]: I0313 12:37:03.755778 3989 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:37:03.760550 master-0 kubenswrapper[3989]: E0313 12:37:03.760494 3989 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 13 12:37:04.476535 master-0 kubenswrapper[3989]: I0313 12:37:04.476478 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:37:05.476645 master-0 kubenswrapper[3989]: I0313 12:37:05.476559 3989 csi_plugin.go:884] Failed to contact API server when 
waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:37:06.479075 master-0 kubenswrapper[3989]: I0313 12:37:06.479000 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:37:06.991396 master-0 kubenswrapper[3989]: E0313 12:37:06.991280 3989 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 12:37:07.477542 master-0 kubenswrapper[3989]: I0313 12:37:07.477430 3989 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 12:37:07.559911 master-0 kubenswrapper[3989]: I0313 12:37:07.559820 3989 csr.go:257] certificate signing request csr-w27rh is issued Mar 13 12:37:08.359598 master-0 kubenswrapper[3989]: I0313 12:37:08.359308 3989 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 13 12:37:08.482070 master-0 kubenswrapper[3989]: I0313 12:37:08.482001 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:08.496512 master-0 kubenswrapper[3989]: I0313 12:37:08.496454 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:08.553889 master-0 kubenswrapper[3989]: I0313 12:37:08.553815 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:08.562063 master-0 kubenswrapper[3989]: I0313 12:37:08.561975 3989 certificate_manager.go:356] 
kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 12:27:50 +0000 UTC, rotation deadline is 2026-03-14 05:43:15.837383036 +0000 UTC Mar 13 12:37:08.562063 master-0 kubenswrapper[3989]: I0313 12:37:08.562053 3989 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h6m7.275334308s for next certificate rotation Mar 13 12:37:09.019197 master-0 kubenswrapper[3989]: I0313 12:37:09.016310 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:09.019197 master-0 kubenswrapper[3989]: E0313 12:37:09.016374 3989 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 13 12:37:09.038450 master-0 kubenswrapper[3989]: I0313 12:37:09.038304 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:09.056663 master-0 kubenswrapper[3989]: I0313 12:37:09.056154 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:09.118132 master-0 kubenswrapper[3989]: I0313 12:37:09.118050 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:09.391323 master-0 kubenswrapper[3989]: I0313 12:37:09.391251 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:09.391323 master-0 kubenswrapper[3989]: E0313 12:37:09.391294 3989 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 13 12:37:09.489620 master-0 kubenswrapper[3989]: I0313 12:37:09.489548 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:09.505402 master-0 kubenswrapper[3989]: I0313 12:37:09.505355 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:09.562609 master-0 
kubenswrapper[3989]: I0313 12:37:09.562531 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:09.830650 master-0 kubenswrapper[3989]: I0313 12:37:09.830494 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:09.830650 master-0 kubenswrapper[3989]: E0313 12:37:09.830568 3989 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 13 12:37:10.137163 master-0 kubenswrapper[3989]: E0313 12:37:10.137030 3989 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0" Mar 13 12:37:10.381534 master-0 kubenswrapper[3989]: I0313 12:37:10.381433 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:10.400329 master-0 kubenswrapper[3989]: I0313 12:37:10.400170 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:10.458717 master-0 kubenswrapper[3989]: I0313 12:37:10.458550 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:10.737927 master-0 kubenswrapper[3989]: I0313 12:37:10.737659 3989 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found Mar 13 12:37:10.737927 master-0 kubenswrapper[3989]: E0313 12:37:10.737719 3989 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found Mar 13 12:37:10.761057 master-0 kubenswrapper[3989]: I0313 12:37:10.760881 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:37:10.762517 master-0 kubenswrapper[3989]: I0313 12:37:10.762460 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:37:10.762517 
master-0 kubenswrapper[3989]: I0313 12:37:10.762504 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:37:10.762517 master-0 kubenswrapper[3989]: I0313 12:37:10.762513 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:37:10.762716 master-0 kubenswrapper[3989]: I0313 12:37:10.762612 3989 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:37:10.772252 master-0 kubenswrapper[3989]: I0313 12:37:10.772205 3989 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 13 12:37:10.772252 master-0 kubenswrapper[3989]: E0313 12:37:10.772251 3989 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found" Mar 13 12:37:10.784684 master-0 kubenswrapper[3989]: E0313 12:37:10.784599 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:10.885071 master-0 kubenswrapper[3989]: E0313 12:37:10.884989 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:10.937797 master-0 kubenswrapper[3989]: I0313 12:37:10.937709 3989 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:37:10.938842 master-0 kubenswrapper[3989]: I0313 12:37:10.938810 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:37:10.938842 master-0 kubenswrapper[3989]: I0313 12:37:10.938842 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:37:10.938955 master-0 kubenswrapper[3989]: I0313 12:37:10.938853 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 
13 12:37:10.939297 master-0 kubenswrapper[3989]: I0313 12:37:10.939262 3989 scope.go:117] "RemoveContainer" containerID="fa04bd1d8b838a856ef3334cc68d9da0449dbf549bcd199af5292664d8bc9f66" Mar 13 12:37:10.939504 master-0 kubenswrapper[3989]: E0313 12:37:10.939440 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 13 12:37:11.237274 master-0 kubenswrapper[3989]: I0313 12:37:11.236505 3989 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Mar 13 12:37:11.237274 master-0 kubenswrapper[3989]: E0313 12:37:11.237161 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:11.252945 master-0 kubenswrapper[3989]: I0313 12:37:11.252718 3989 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 13 12:37:11.326693 master-0 kubenswrapper[3989]: I0313 12:37:11.326492 3989 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 12:37:11.337895 master-0 kubenswrapper[3989]: E0313 12:37:11.337856 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:11.438203 master-0 kubenswrapper[3989]: E0313 12:37:11.438141 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:11.539302 master-0 kubenswrapper[3989]: E0313 12:37:11.539103 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 
12:37:11.639866 master-0 kubenswrapper[3989]: E0313 12:37:11.639807 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:11.740344 master-0 kubenswrapper[3989]: E0313 12:37:11.740276 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:11.840829 master-0 kubenswrapper[3989]: E0313 12:37:11.840680 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:11.941910 master-0 kubenswrapper[3989]: E0313 12:37:11.941817 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:12.042347 master-0 kubenswrapper[3989]: E0313 12:37:12.042254 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:12.143274 master-0 kubenswrapper[3989]: E0313 12:37:12.143197 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:12.244177 master-0 kubenswrapper[3989]: E0313 12:37:12.244124 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:12.344728 master-0 kubenswrapper[3989]: E0313 12:37:12.344634 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:12.445530 master-0 kubenswrapper[3989]: E0313 12:37:12.445346 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:12.546255 master-0 kubenswrapper[3989]: E0313 12:37:12.546177 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:12.646824 master-0 kubenswrapper[3989]: E0313 12:37:12.646756 3989 kubelet_node_status.go:503] "Error getting the current 
node from lister" err="node \"master-0\" not found" Mar 13 12:37:12.747591 master-0 kubenswrapper[3989]: E0313 12:37:12.747414 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:12.847936 master-0 kubenswrapper[3989]: E0313 12:37:12.847872 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:12.948919 master-0 kubenswrapper[3989]: E0313 12:37:12.948844 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:13.049794 master-0 kubenswrapper[3989]: E0313 12:37:13.049647 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:13.150671 master-0 kubenswrapper[3989]: E0313 12:37:13.150551 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:13.251695 master-0 kubenswrapper[3989]: E0313 12:37:13.251470 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:13.352282 master-0 kubenswrapper[3989]: E0313 12:37:13.352134 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:14.209886 master-0 kubenswrapper[3989]: E0313 12:37:14.209798 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:14.310115 master-0 kubenswrapper[3989]: E0313 12:37:14.310014 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:14.410900 master-0 kubenswrapper[3989]: E0313 12:37:14.410782 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:14.512136 master-0 kubenswrapper[3989]: E0313 12:37:14.511820 
3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:14.613071 master-0 kubenswrapper[3989]: E0313 12:37:14.612917 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:14.713423 master-0 kubenswrapper[3989]: E0313 12:37:14.713262 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:14.814287 master-0 kubenswrapper[3989]: E0313 12:37:14.814082 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:14.915115 master-0 kubenswrapper[3989]: E0313 12:37:14.915013 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:15.035610 master-0 kubenswrapper[3989]: E0313 12:37:15.035351 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:15.146066 master-0 kubenswrapper[3989]: E0313 12:37:15.145990 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:15.442304 master-0 kubenswrapper[3989]: E0313 12:37:15.442006 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:15.453762 master-0 kubenswrapper[3989]: I0313 12:37:15.453681 3989 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 12:37:15.542386 master-0 kubenswrapper[3989]: E0313 12:37:15.542287 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:15.642795 master-0 kubenswrapper[3989]: E0313 12:37:15.642672 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:15.743459 
master-0 kubenswrapper[3989]: E0313 12:37:15.743288 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:15.844208 master-0 kubenswrapper[3989]: E0313 12:37:15.844123 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:15.944355 master-0 kubenswrapper[3989]: E0313 12:37:15.944283 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:16.044622 master-0 kubenswrapper[3989]: E0313 12:37:16.044451 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:16.144866 master-0 kubenswrapper[3989]: E0313 12:37:16.144806 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:16.245503 master-0 kubenswrapper[3989]: E0313 12:37:16.245433 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:16.345746 master-0 kubenswrapper[3989]: E0313 12:37:16.345607 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:16.446125 master-0 kubenswrapper[3989]: E0313 12:37:16.446068 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:16.546640 master-0 kubenswrapper[3989]: E0313 12:37:16.546544 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:16.647266 master-0 kubenswrapper[3989]: E0313 12:37:16.647201 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:16.748123 master-0 kubenswrapper[3989]: E0313 12:37:16.748033 3989 kubelet_node_status.go:503] "Error getting the current node from lister" 
err="node \"master-0\" not found" Mar 13 12:37:16.849042 master-0 kubenswrapper[3989]: E0313 12:37:16.848981 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:16.950255 master-0 kubenswrapper[3989]: E0313 12:37:16.950085 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:16.991550 master-0 kubenswrapper[3989]: E0313 12:37:16.991478 3989 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 12:37:17.050861 master-0 kubenswrapper[3989]: E0313 12:37:17.050791 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:17.151072 master-0 kubenswrapper[3989]: E0313 12:37:17.151010 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:17.251624 master-0 kubenswrapper[3989]: E0313 12:37:17.251453 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:17.309819 master-0 kubenswrapper[3989]: I0313 12:37:17.309750 3989 csr.go:261] certificate signing request csr-52hft is approved, waiting to be issued Mar 13 12:37:17.352236 master-0 kubenswrapper[3989]: E0313 12:37:17.352155 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:17.364420 master-0 kubenswrapper[3989]: I0313 12:37:17.364359 3989 csr.go:257] certificate signing request csr-52hft is issued Mar 13 12:37:17.454273 master-0 kubenswrapper[3989]: E0313 12:37:17.454221 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:17.555477 master-0 kubenswrapper[3989]: E0313 12:37:17.555296 3989 kubelet_node_status.go:503] "Error getting the current node 
from lister" err="node \"master-0\" not found" Mar 13 12:37:17.656261 master-0 kubenswrapper[3989]: E0313 12:37:17.656165 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:17.756885 master-0 kubenswrapper[3989]: E0313 12:37:17.756802 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:17.857163 master-0 kubenswrapper[3989]: E0313 12:37:17.856899 3989 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:37:17.919817 master-0 kubenswrapper[3989]: I0313 12:37:17.919759 3989 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 12:37:18.366436 master-0 kubenswrapper[3989]: I0313 12:37:18.366303 3989 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 12:27:50 +0000 UTC, rotation deadline is 2026-03-14 09:46:25.312095619 +0000 UTC Mar 13 12:37:18.366436 master-0 kubenswrapper[3989]: I0313 12:37:18.366376 3989 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h9m6.945722616s for next certificate rotation Mar 13 12:37:18.444823 master-0 kubenswrapper[3989]: I0313 12:37:18.444738 3989 apiserver.go:52] "Watching apiserver" Mar 13 12:37:18.449632 master-0 kubenswrapper[3989]: I0313 12:37:18.449588 3989 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 12:37:18.450039 master-0 kubenswrapper[3989]: I0313 12:37:18.449963 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg","openshift-network-operator/network-operator-7c649bf6d4-fcthv","assisted-installer/assisted-installer-controller-7vm6x"] Mar 13 12:37:18.450749 master-0 kubenswrapper[3989]: I0313 12:37:18.450720 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.452128 master-0 kubenswrapper[3989]: I0313 12:37:18.451867 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.452128 master-0 kubenswrapper[3989]: I0313 12:37:18.451947 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.452842 master-0 kubenswrapper[3989]: I0313 12:37:18.452810 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt"
Mar 13 12:37:18.453005 master-0 kubenswrapper[3989]: I0313 12:37:18.452966 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config"
Mar 13 12:37:18.453420 master-0 kubenswrapper[3989]: I0313 12:37:18.453355 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 12:37:18.453656 master-0 kubenswrapper[3989]: I0313 12:37:18.453609 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt"
Mar 13 12:37:18.453947 master-0 kubenswrapper[3989]: I0313 12:37:18.453874 3989 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret"
Mar 13 12:37:18.454677 master-0 kubenswrapper[3989]: I0313 12:37:18.454659 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 13 12:37:18.455079 master-0 kubenswrapper[3989]: I0313 12:37:18.454742 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 12:37:18.455079 master-0 kubenswrapper[3989]: I0313 12:37:18.454773 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 13 12:37:18.455079 master-0 kubenswrapper[3989]: I0313 12:37:18.454895 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 12:37:18.456059 master-0 kubenswrapper[3989]: I0313 12:37:18.456032 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 12:37:18.475156 master-0 kubenswrapper[3989]: I0313 12:37:18.475083 3989 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 13 12:37:18.560947 master-0 kubenswrapper[3989]: I0313 12:37:18.560862 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.561358 master-0 kubenswrapper[3989]: I0313 12:37:18.560957 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-sno-bootstrap-files\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.561358 master-0 kubenswrapper[3989]: I0313 12:37:18.561019 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-ca-bundle\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") "
pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.561358 master-0 kubenswrapper[3989]: I0313 12:37:18.561049 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-host-etc-kube\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.561358 master-0 kubenswrapper[3989]: I0313 12:37:18.561183 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-resolv-conf\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.561358 master-0 kubenswrapper[3989]: I0313 12:37:18.561263 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jknp\" (UniqueName: \"kubernetes.io/projected/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-kube-api-access-5jknp\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.561358 master-0 kubenswrapper[3989]: I0313 12:37:18.561320 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.561761 master-0 kubenswrapper[3989]: I0313 12:37:18.561382 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9kmk\" (UniqueName: \"kubernetes.io/projected/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-kube-api-access-w9kmk\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.561761 master-0 kubenswrapper[3989]: I0313 12:37:18.561408 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3eb38e0-d8b5-46fc-809d-73791d569816-service-ca\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.561761 master-0 kubenswrapper[3989]: I0313 12:37:18.561458 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3eb38e0-d8b5-46fc-809d-73791d569816-kube-api-access\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.561761 master-0 kubenswrapper[3989]: I0313 12:37:18.561497 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-metrics-tls\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.561761 master-0 kubenswrapper[3989]: I0313 12:37:18.561544 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-ssl-certs\") pod
\"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.561761 master-0 kubenswrapper[3989]: I0313 12:37:18.561602 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-var-run-resolv-conf\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.662106 master-0 kubenswrapper[3989]: I0313 12:37:18.661994 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jknp\" (UniqueName: \"kubernetes.io/projected/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-kube-api-access-5jknp\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.662106 master-0 kubenswrapper[3989]: I0313 12:37:18.662064 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.662822 master-0 kubenswrapper[3989]: I0313 12:37:18.662345 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9kmk\" (UniqueName: \"kubernetes.io/projected/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-kube-api-access-w9kmk\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.662822 master-0 kubenswrapper[3989]: I0313 12:37:18.662458 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.662822 master-0 kubenswrapper[3989]: I0313 12:37:18.662519 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3eb38e0-d8b5-46fc-809d-73791d569816-service-ca\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.662822 master-0 kubenswrapper[3989]: I0313 12:37:18.662626 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3eb38e0-d8b5-46fc-809d-73791d569816-kube-api-access\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.662822 master-0 kubenswrapper[3989]: I0313 12:37:18.662681 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-var-run-resolv-conf\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.662812 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName:
\"kubernetes.io/secret/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-metrics-tls\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.662855 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-var-run-resolv-conf\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663015 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663065 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-sno-bootstrap-files\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663101 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663132 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-ca-bundle\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663161 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-host-etc-kube\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663188 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-resolv-conf\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663243 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-resolv-conf\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663280 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-7rfrg\"
(UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663309 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-sno-bootstrap-files\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: E0313 12:37:18.663401 3989 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663477 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-ca-bundle\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: E0313 12:37:18.663628 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:19.163464975 +0000 UTC m=+43.281932612 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.663613 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-host-etc-kube\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.664454 master-0 kubenswrapper[3989]: I0313 12:37:18.664358 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3eb38e0-d8b5-46fc-809d-73791d569816-service-ca\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.668169 master-0 kubenswrapper[3989]: I0313 12:37:18.665449 3989 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled.
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 13 12:37:18.674539 master-0 kubenswrapper[3989]: I0313 12:37:18.674463 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-metrics-tls\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.679660 master-0 kubenswrapper[3989]: I0313 12:37:18.679631 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jknp\" (UniqueName: \"kubernetes.io/projected/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-kube-api-access-5jknp\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.683162 master-0 kubenswrapper[3989]: I0313 12:37:18.683130 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3eb38e0-d8b5-46fc-809d-73791d569816-kube-api-access\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:18.693099 master-0 kubenswrapper[3989]: I0313 12:37:18.693027 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9kmk\" (UniqueName: \"kubernetes.io/projected/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-kube-api-access-w9kmk\") pod \"assisted-installer-controller-7vm6x\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") " pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.796280 master-0 kubenswrapper[3989]: I0313 12:37:18.796176 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:18.814008 master-0 kubenswrapper[3989]: I0313 12:37:18.813012 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:37:18.898612 master-0 kubenswrapper[3989]: I0313 12:37:18.889986 3989 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 13 12:37:19.182377 master-0 kubenswrapper[3989]: I0313 12:37:19.182272 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:19.182829 master-0 kubenswrapper[3989]: E0313 12:37:19.182497 3989 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:19.182829 master-0 kubenswrapper[3989]: E0313 12:37:19.182668 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:20.182646651 +0000 UTC m=+44.301114288 (durationBeforeRetry 1s).
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:19.366988 master-0 kubenswrapper[3989]: I0313 12:37:19.366887 3989 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 12:27:50 +0000 UTC, rotation deadline is 2026-03-14 06:17:50.827982776 +0000 UTC
Mar 13 12:37:19.366988 master-0 kubenswrapper[3989]: I0313 12:37:19.366941 3989 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 17h40m31.461043789s for next certificate rotation
Mar 13 12:37:19.455917 master-0 kubenswrapper[3989]: I0313 12:37:19.455791 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" event={"ID":"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9","Type":"ContainerStarted","Data":"c50b66c08b64d0837766db36e00d9e48a3e7f90a13ec9264ea03f094b56406e2"}
Mar 13 12:37:19.456964 master-0 kubenswrapper[3989]: I0313 12:37:19.456925 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-7vm6x" event={"ID":"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132","Type":"ContainerStarted","Data":"2c01927a76a297da5840d73eff9921d3c26cf5f0e7c0b06e61b8b4a6964b05b8"}
Mar 13 12:37:20.202994 master-0 kubenswrapper[3989]: I0313 12:37:20.202907 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:20.203367 master-0 kubenswrapper[3989]: E0313 12:37:20.203096 3989 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:20.203367 master-0 kubenswrapper[3989]: E0313 12:37:20.203205 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:22.203183597 +0000 UTC m=+46.321651234 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:22.045175 master-0 kubenswrapper[3989]: I0313 12:37:22.044088 3989 scope.go:117] "RemoveContainer" containerID="fa04bd1d8b838a856ef3334cc68d9da0449dbf549bcd199af5292664d8bc9f66"
Mar 13 12:37:22.045175 master-0 kubenswrapper[3989]: I0313 12:37:22.044451 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 13 12:37:22.230931 master-0 kubenswrapper[3989]: I0313 12:37:22.230640 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:22.230931 master-0 kubenswrapper[3989]: E0313 12:37:22.230885 3989 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:22.231104 master-0 kubenswrapper[3989]: E0313 12:37:22.230981 3989 nestedpendingoperations.go:348] Operation for
"{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:26.230964402 +0000 UTC m=+50.349432039 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:23.583660 master-0 kubenswrapper[3989]: I0313 12:37:23.583464 3989 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 13 12:37:23.584378 master-0 kubenswrapper[3989]: I0313 12:37:23.583960 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"ed78e1786123e1fdf666e037202049096483e9131a9b2ba5d12c1d669373c1fa"}
Mar 13 12:37:23.599768 master-0 kubenswrapper[3989]: I0313 12:37:23.599535 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=2.59950569 podStartE2EDuration="2.59950569s" podCreationTimestamp="2026-03-13 12:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:37:23.59887546 +0000 UTC m=+47.717343097" watchObservedRunningTime="2026-03-13 12:37:23.59950569 +0000 UTC m=+47.717973327"
Mar 13 12:37:26.314096 master-0 kubenswrapper[3989]: I0313 12:37:26.314023 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:37:26.314774 master-0 kubenswrapper[3989]: E0313 12:37:26.314271 3989 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:26.314774 master-0 kubenswrapper[3989]: E0313 12:37:26.314382 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:34.314356014 +0000 UTC m=+58.432823651 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:37:26.591404 master-0 kubenswrapper[3989]: I0313 12:37:26.591352 3989 generic.go:334] "Generic (PLEG): container finished" podID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerID="3d699a661192c0fe629e3652881a79b8980021e82a7bc93d27f3ce7bd63fd41d" exitCode=0
Mar 13 12:37:26.591404 master-0 kubenswrapper[3989]: I0313 12:37:26.591402 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-7vm6x" event={"ID":"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132","Type":"ContainerDied","Data":"3d699a661192c0fe629e3652881a79b8980021e82a7bc93d27f3ce7bd63fd41d"}
Mar 13 12:37:27.640035 master-0 kubenswrapper[3989]: I0313 12:37:27.639940 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
event={"ID":"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9","Type":"ContainerStarted","Data":"c15cc561a2dc2cb30249635a38f6de933793bd539f9b4fe8d60280e00e99d819"}
Mar 13 12:37:27.655996 master-0 kubenswrapper[3989]: I0313 12:37:27.655937 3989 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:37:27.682081 master-0 kubenswrapper[3989]: I0313 12:37:27.681988 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" podStartSLOduration=9.16508397 podStartE2EDuration="16.681959364s" podCreationTimestamp="2026-03-13 12:37:11 +0000 UTC" firstStartedPulling="2026-03-13 12:37:18.830255325 +0000 UTC m=+42.948722982" lastFinishedPulling="2026-03-13 12:37:26.347130739 +0000 UTC m=+50.465598376" observedRunningTime="2026-03-13 12:37:27.663417545 +0000 UTC m=+51.781885212" watchObservedRunningTime="2026-03-13 12:37:27.681959364 +0000 UTC m=+51.800427001"
Mar 13 12:37:27.726222 master-0 kubenswrapper[3989]: I0313 12:37:27.726144 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9kmk\" (UniqueName: \"kubernetes.io/projected/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-kube-api-access-w9kmk\") pod \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") "
Mar 13 12:37:27.726222 master-0 kubenswrapper[3989]: I0313 12:37:27.726194 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-var-run-resolv-conf\") pod \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") "
Mar 13 12:37:27.726222 master-0 kubenswrapper[3989]: I0313 12:37:27.726210 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-sno-bootstrap-files\") pod \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") "
Mar 13 12:37:27.726222 master-0 kubenswrapper[3989]: I0313 12:37:27.726227 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-resolv-conf\") pod \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") "
Mar 13 12:37:27.726610 master-0 kubenswrapper[3989]: I0313 12:37:27.726242 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-ca-bundle\") pod \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\" (UID: \"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132\") "
Mar 13 12:37:27.726610 master-0 kubenswrapper[3989]: I0313 12:37:27.726373 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" (UID: "2352a350-0a7c-4fcd-ba8f-ee9a4c80b132"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:37:27.726610 master-0 kubenswrapper[3989]: I0313 12:37:27.726415 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" (UID: "2352a350-0a7c-4fcd-ba8f-ee9a4c80b132"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:37:27.727243 master-0 kubenswrapper[3989]: I0313 12:37:27.727154 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" (UID: "2352a350-0a7c-4fcd-ba8f-ee9a4c80b132"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:37:27.727607 master-0 kubenswrapper[3989]: I0313 12:37:27.727559 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" (UID: "2352a350-0a7c-4fcd-ba8f-ee9a4c80b132"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:37:27.730460 master-0 kubenswrapper[3989]: I0313 12:37:27.730420 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-kube-api-access-w9kmk" (OuterVolumeSpecName: "kube-api-access-w9kmk") pod "2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" (UID: "2352a350-0a7c-4fcd-ba8f-ee9a4c80b132"). InnerVolumeSpecName "kube-api-access-w9kmk".
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:37:27.827089 master-0 kubenswrapper[3989]: I0313 12:37:27.826997 3989 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9kmk\" (UniqueName: \"kubernetes.io/projected/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-kube-api-access-w9kmk\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:27.827089 master-0 kubenswrapper[3989]: I0313 12:37:27.827042 3989 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:27.827089 master-0 kubenswrapper[3989]: I0313 12:37:27.827052 3989 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:27.827089 master-0 kubenswrapper[3989]: I0313 12:37:27.827062 3989 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-resolv-conf\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:27.827089 master-0 kubenswrapper[3989]: I0313 12:37:27.827070 3989 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/2352a350-0a7c-4fcd-ba8f-ee9a4c80b132-host-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:28.645048 master-0 kubenswrapper[3989]: I0313 12:37:28.644921 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-7vm6x" event={"ID":"2352a350-0a7c-4fcd-ba8f-ee9a4c80b132","Type":"ContainerDied","Data":"2c01927a76a297da5840d73eff9921d3c26cf5f0e7c0b06e61b8b4a6964b05b8"} Mar 13 12:37:28.645048 master-0 kubenswrapper[3989]: I0313 12:37:28.644991 3989 pod_container_deletor.go:80] "Container not found 
in pod's containers" containerID="2c01927a76a297da5840d73eff9921d3c26cf5f0e7c0b06e61b8b4a6964b05b8" Mar 13 12:37:28.645783 master-0 kubenswrapper[3989]: I0313 12:37:28.645074 3989 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-7vm6x" Mar 13 12:37:29.313272 master-0 kubenswrapper[3989]: I0313 12:37:29.313200 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-vkr5w"] Mar 13 12:37:29.313556 master-0 kubenswrapper[3989]: E0313 12:37:29.313368 3989 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerName="assisted-installer-controller" Mar 13 12:37:29.313556 master-0 kubenswrapper[3989]: I0313 12:37:29.313400 3989 state_mem.go:107] "Deleted CPUSet assignment" podUID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerName="assisted-installer-controller" Mar 13 12:37:29.313556 master-0 kubenswrapper[3989]: I0313 12:37:29.313446 3989 memory_manager.go:354] "RemoveStaleState removing state" podUID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerName="assisted-installer-controller" Mar 13 12:37:29.313746 master-0 kubenswrapper[3989]: I0313 12:37:29.313658 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-vkr5w" Mar 13 12:37:29.439119 master-0 kubenswrapper[3989]: I0313 12:37:29.439058 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2wh6\" (UniqueName: \"kubernetes.io/projected/0b19a429-6a4f-4f90-9901-417fe8921ccc-kube-api-access-t2wh6\") pod \"mtu-prober-vkr5w\" (UID: \"0b19a429-6a4f-4f90-9901-417fe8921ccc\") " pod="openshift-network-operator/mtu-prober-vkr5w" Mar 13 12:37:29.540164 master-0 kubenswrapper[3989]: I0313 12:37:29.540097 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2wh6\" (UniqueName: \"kubernetes.io/projected/0b19a429-6a4f-4f90-9901-417fe8921ccc-kube-api-access-t2wh6\") pod \"mtu-prober-vkr5w\" (UID: \"0b19a429-6a4f-4f90-9901-417fe8921ccc\") " pod="openshift-network-operator/mtu-prober-vkr5w" Mar 13 12:37:29.561877 master-0 kubenswrapper[3989]: I0313 12:37:29.561830 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2wh6\" (UniqueName: \"kubernetes.io/projected/0b19a429-6a4f-4f90-9901-417fe8921ccc-kube-api-access-t2wh6\") pod \"mtu-prober-vkr5w\" (UID: \"0b19a429-6a4f-4f90-9901-417fe8921ccc\") " pod="openshift-network-operator/mtu-prober-vkr5w" Mar 13 12:37:29.624357 master-0 kubenswrapper[3989]: I0313 12:37:29.624172 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-vkr5w" Mar 13 12:37:29.648038 master-0 kubenswrapper[3989]: I0313 12:37:29.647968 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-vkr5w" event={"ID":"0b19a429-6a4f-4f90-9901-417fe8921ccc","Type":"ContainerStarted","Data":"ba036bffe5621c444daa0bd1c229eac4d583082b0f6956a7bb655f8664a38947"} Mar 13 12:37:30.652994 master-0 kubenswrapper[3989]: I0313 12:37:30.652949 3989 generic.go:334] "Generic (PLEG): container finished" podID="0b19a429-6a4f-4f90-9901-417fe8921ccc" containerID="9f4ddd8b81aa8e6f6453e9d79c9c9826152b36b58f325733cabc91a77b93f83c" exitCode=0 Mar 13 12:37:30.653532 master-0 kubenswrapper[3989]: I0313 12:37:30.653011 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-vkr5w" event={"ID":"0b19a429-6a4f-4f90-9901-417fe8921ccc","Type":"ContainerDied","Data":"9f4ddd8b81aa8e6f6453e9d79c9c9826152b36b58f325733cabc91a77b93f83c"} Mar 13 12:37:31.673924 master-0 kubenswrapper[3989]: I0313 12:37:31.673875 3989 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-vkr5w" Mar 13 12:37:31.856628 master-0 kubenswrapper[3989]: I0313 12:37:31.856540 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2wh6\" (UniqueName: \"kubernetes.io/projected/0b19a429-6a4f-4f90-9901-417fe8921ccc-kube-api-access-t2wh6\") pod \"0b19a429-6a4f-4f90-9901-417fe8921ccc\" (UID: \"0b19a429-6a4f-4f90-9901-417fe8921ccc\") " Mar 13 12:37:31.860945 master-0 kubenswrapper[3989]: I0313 12:37:31.860910 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b19a429-6a4f-4f90-9901-417fe8921ccc-kube-api-access-t2wh6" (OuterVolumeSpecName: "kube-api-access-t2wh6") pod "0b19a429-6a4f-4f90-9901-417fe8921ccc" (UID: "0b19a429-6a4f-4f90-9901-417fe8921ccc"). 
InnerVolumeSpecName "kube-api-access-t2wh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:37:31.957536 master-0 kubenswrapper[3989]: I0313 12:37:31.957335 3989 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2wh6\" (UniqueName: \"kubernetes.io/projected/0b19a429-6a4f-4f90-9901-417fe8921ccc-kube-api-access-t2wh6\") on node \"master-0\" DevicePath \"\"" Mar 13 12:37:32.660121 master-0 kubenswrapper[3989]: I0313 12:37:32.660078 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-vkr5w" event={"ID":"0b19a429-6a4f-4f90-9901-417fe8921ccc","Type":"ContainerDied","Data":"ba036bffe5621c444daa0bd1c229eac4d583082b0f6956a7bb655f8664a38947"} Mar 13 12:37:32.660383 master-0 kubenswrapper[3989]: I0313 12:37:32.660369 3989 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba036bffe5621c444daa0bd1c229eac4d583082b0f6956a7bb655f8664a38947" Mar 13 12:37:32.660483 master-0 kubenswrapper[3989]: I0313 12:37:32.660160 3989 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/mtu-prober-vkr5w" Mar 13 12:37:34.321334 master-0 kubenswrapper[3989]: I0313 12:37:34.321277 3989 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-vkr5w"] Mar 13 12:37:34.331621 master-0 kubenswrapper[3989]: I0313 12:37:34.331556 3989 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-vkr5w"] Mar 13 12:37:34.373973 master-0 kubenswrapper[3989]: I0313 12:37:34.373925 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:37:34.374356 master-0 kubenswrapper[3989]: E0313 12:37:34.374299 3989 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:37:34.374459 master-0 kubenswrapper[3989]: E0313 12:37:34.374418 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:50.374390569 +0000 UTC m=+74.492858206 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:37:34.940798 master-0 kubenswrapper[3989]: I0313 12:37:34.940762 3989 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b19a429-6a4f-4f90-9901-417fe8921ccc" path="/var/lib/kubelet/pods/0b19a429-6a4f-4f90-9901-417fe8921ccc/volumes" Mar 13 12:37:39.198276 master-0 kubenswrapper[3989]: I0313 12:37:39.198183 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-6c7r9"] Mar 13 12:37:39.199188 master-0 kubenswrapper[3989]: E0313 12:37:39.198363 3989 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b19a429-6a4f-4f90-9901-417fe8921ccc" containerName="prober" Mar 13 12:37:39.199188 master-0 kubenswrapper[3989]: I0313 12:37:39.198385 3989 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b19a429-6a4f-4f90-9901-417fe8921ccc" containerName="prober" Mar 13 12:37:39.199188 master-0 kubenswrapper[3989]: I0313 12:37:39.198446 3989 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b19a429-6a4f-4f90-9901-417fe8921ccc" containerName="prober" Mar 13 12:37:39.199188 master-0 kubenswrapper[3989]: I0313 12:37:39.198913 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.201651 master-0 kubenswrapper[3989]: I0313 12:37:39.201612 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 12:37:39.202663 master-0 kubenswrapper[3989]: I0313 12:37:39.202608 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 12:37:39.203002 master-0 kubenswrapper[3989]: I0313 12:37:39.202800 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 12:37:39.203002 master-0 kubenswrapper[3989]: I0313 12:37:39.202853 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 12:37:39.303881 master-0 kubenswrapper[3989]: I0313 12:37:39.303808 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-kubelet\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.303881 master-0 kubenswrapper[3989]: I0313 12:37:39.303894 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-multus\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304226 master-0 kubenswrapper[3989]: I0313 12:37:39.303925 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-socket-dir-parent\") pod \"multus-6c7r9\" (UID: 
\"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304226 master-0 kubenswrapper[3989]: I0313 12:37:39.303964 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-multus-certs\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304226 master-0 kubenswrapper[3989]: I0313 12:37:39.303988 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-bin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304226 master-0 kubenswrapper[3989]: I0313 12:37:39.304012 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-hostroot\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304226 master-0 kubenswrapper[3989]: I0313 12:37:39.304061 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cnibin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304226 master-0 kubenswrapper[3989]: I0313 12:37:39.304081 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-etc-kubernetes\") pod \"multus-6c7r9\" (UID: 
\"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304226 master-0 kubenswrapper[3989]: I0313 12:37:39.304153 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-conf-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304226 master-0 kubenswrapper[3989]: I0313 12:37:39.304190 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-system-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304677 master-0 kubenswrapper[3989]: I0313 12:37:39.304255 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-os-release\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304677 master-0 kubenswrapper[3989]: I0313 12:37:39.304309 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cni-binary-copy\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304677 master-0 kubenswrapper[3989]: I0313 12:37:39.304346 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-netns\") pod \"multus-6c7r9\" (UID: 
\"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304677 master-0 kubenswrapper[3989]: I0313 12:37:39.304366 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9nhl\" (UniqueName: \"kubernetes.io/projected/ffcc3a23-d81c-4064-a24a-857dbe3222c8-kube-api-access-b9nhl\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304677 master-0 kubenswrapper[3989]: I0313 12:37:39.304409 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304677 master-0 kubenswrapper[3989]: I0313 12:37:39.304434 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-k8s-cni-cncf-io\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.304677 master-0 kubenswrapper[3989]: I0313 12:37:39.304472 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-daemon-config\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.389073 master-0 kubenswrapper[3989]: I0313 12:37:39.389005 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-wl6w4"] Mar 13 12:37:39.389822 master-0 kubenswrapper[3989]: I0313 12:37:39.389782 3989 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.392794 master-0 kubenswrapper[3989]: I0313 12:37:39.392747 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 12:37:39.392924 master-0 kubenswrapper[3989]: I0313 12:37:39.392876 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 12:37:39.404774 master-0 kubenswrapper[3989]: I0313 12:37:39.404731 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-k8s-cni-cncf-io\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.404774 master-0 kubenswrapper[3989]: I0313 12:37:39.404772 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-daemon-config\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.404929 master-0 kubenswrapper[3989]: I0313 12:37:39.404790 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-multus\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.404929 master-0 kubenswrapper[3989]: I0313 12:37:39.404808 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-kubelet\") pod \"multus-6c7r9\" (UID: 
\"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.404929 master-0 kubenswrapper[3989]: I0313 12:37:39.404824 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-socket-dir-parent\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405080 master-0 kubenswrapper[3989]: I0313 12:37:39.405018 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-k8s-cni-cncf-io\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405180 master-0 kubenswrapper[3989]: I0313 12:37:39.405150 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-multus-certs\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405273 master-0 kubenswrapper[3989]: I0313 12:37:39.405201 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-bin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405273 master-0 kubenswrapper[3989]: I0313 12:37:39.405233 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-hostroot\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " 
pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405273 master-0 kubenswrapper[3989]: I0313 12:37:39.405255 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cnibin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405408 master-0 kubenswrapper[3989]: I0313 12:37:39.405282 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-etc-kubernetes\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405408 master-0 kubenswrapper[3989]: I0313 12:37:39.405306 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-conf-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405408 master-0 kubenswrapper[3989]: I0313 12:37:39.405339 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9nhl\" (UniqueName: \"kubernetes.io/projected/ffcc3a23-d81c-4064-a24a-857dbe3222c8-kube-api-access-b9nhl\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405408 master-0 kubenswrapper[3989]: I0313 12:37:39.405368 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-system-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405658 master-0 kubenswrapper[3989]: I0313 12:37:39.405460 3989 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-os-release\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405658 master-0 kubenswrapper[3989]: I0313 12:37:39.405506 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cni-binary-copy\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405658 master-0 kubenswrapper[3989]: I0313 12:37:39.405535 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-netns\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405658 master-0 kubenswrapper[3989]: I0313 12:37:39.405562 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405807 master-0 kubenswrapper[3989]: I0313 12:37:39.405724 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-etc-kubernetes\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405807 master-0 kubenswrapper[3989]: I0313 12:37:39.405747 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-bin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.405961 master-0 kubenswrapper[3989]: I0313 12:37:39.405931 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-multus\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406027 master-0 kubenswrapper[3989]: I0313 12:37:39.405964 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-socket-dir-parent\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406086 master-0 kubenswrapper[3989]: I0313 12:37:39.406070 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-kubelet\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406144 master-0 kubenswrapper[3989]: I0313 12:37:39.406115 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-hostroot\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406214 master-0 kubenswrapper[3989]: I0313 12:37:39.406193 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cnibin\") pod \"multus-6c7r9\" (UID: 
\"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406415 master-0 kubenswrapper[3989]: I0313 12:37:39.406389 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-system-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406477 master-0 kubenswrapper[3989]: I0313 12:37:39.406448 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-os-release\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406522 master-0 kubenswrapper[3989]: I0313 12:37:39.406481 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-conf-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406522 master-0 kubenswrapper[3989]: I0313 12:37:39.406504 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-netns\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406737 master-0 kubenswrapper[3989]: I0313 12:37:39.406711 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-daemon-config\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406785 master-0 kubenswrapper[3989]: I0313 
12:37:39.406737 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-multus-certs\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406785 master-0 kubenswrapper[3989]: I0313 12:37:39.406750 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.406852 master-0 kubenswrapper[3989]: I0313 12:37:39.406710 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cni-binary-copy\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.423605 master-0 kubenswrapper[3989]: I0313 12:37:39.423516 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9nhl\" (UniqueName: \"kubernetes.io/projected/ffcc3a23-d81c-4064-a24a-857dbe3222c8-kube-api-access-b9nhl\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.506486 master-0 kubenswrapper[3989]: I0313 12:37:39.506329 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn59j\" (UniqueName: \"kubernetes.io/projected/6d1a0616-4479-4621-b042-36a586bd8248-kube-api-access-jn59j\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.506486 master-0 kubenswrapper[3989]: I0313 12:37:39.506389 3989 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.506486 master-0 kubenswrapper[3989]: I0313 12:37:39.506416 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.506486 master-0 kubenswrapper[3989]: I0313 12:37:39.506435 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-cnibin\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.506486 master-0 kubenswrapper[3989]: I0313 12:37:39.506453 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-system-cni-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.506486 master-0 kubenswrapper[3989]: I0313 12:37:39.506490 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-os-release\") pod 
\"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.507101 master-0 kubenswrapper[3989]: I0313 12:37:39.506531 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-binary-copy\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.507101 master-0 kubenswrapper[3989]: I0313 12:37:39.506550 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-whereabouts-configmap\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.520463 master-0 kubenswrapper[3989]: I0313 12:37:39.519689 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-6c7r9" Mar 13 12:37:39.607806 master-0 kubenswrapper[3989]: I0313 12:37:39.607718 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-cnibin\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.607806 master-0 kubenswrapper[3989]: I0313 12:37:39.607798 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-system-cni-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.608169 master-0 kubenswrapper[3989]: I0313 12:37:39.607898 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-cnibin\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.608169 master-0 kubenswrapper[3989]: I0313 12:37:39.607936 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-os-release\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.608169 master-0 kubenswrapper[3989]: I0313 12:37:39.607969 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.608169 master-0 kubenswrapper[3989]: I0313 12:37:39.607910 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-system-cni-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.608169 master-0 kubenswrapper[3989]: I0313 12:37:39.608010 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-whereabouts-configmap\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.608169 master-0 kubenswrapper[3989]: I0313 12:37:39.608072 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn59j\" (UniqueName: \"kubernetes.io/projected/6d1a0616-4479-4621-b042-36a586bd8248-kube-api-access-jn59j\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.608408 master-0 kubenswrapper[3989]: I0313 12:37:39.608207 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.608408 master-0 kubenswrapper[3989]: I0313 12:37:39.608234 3989 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.608408 master-0 kubenswrapper[3989]: I0313 12:37:39.608295 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-os-release\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.608547 master-0 kubenswrapper[3989]: I0313 12:37:39.608448 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.610645 master-0 kubenswrapper[3989]: I0313 12:37:39.609100 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-whereabouts-configmap\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.610645 master-0 kubenswrapper[3989]: I0313 12:37:39.609254 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 
13 12:37:39.610645 master-0 kubenswrapper[3989]: I0313 12:37:39.609342 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-binary-copy\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.630698 master-0 kubenswrapper[3989]: I0313 12:37:39.628373 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn59j\" (UniqueName: \"kubernetes.io/projected/6d1a0616-4479-4621-b042-36a586bd8248-kube-api-access-jn59j\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.677682 master-0 kubenswrapper[3989]: I0313 12:37:39.677521 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6c7r9" event={"ID":"ffcc3a23-d81c-4064-a24a-857dbe3222c8","Type":"ContainerStarted","Data":"c80b4d29df703d07a23db2b30b8fb506c55a2da67bacba3eebf13044aa056687"} Mar 13 12:37:39.705109 master-0 kubenswrapper[3989]: I0313 12:37:39.704964 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:37:39.717526 master-0 kubenswrapper[3989]: W0313 12:37:39.717407 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6d1a0616_4479_4621_b042_36a586bd8248.slice/crio-a52c7346de93add1d237d99f0d1a7027e99e77d0afd84eceb9bcc49809bf923e WatchSource:0}: Error finding container a52c7346de93add1d237d99f0d1a7027e99e77d0afd84eceb9bcc49809bf923e: Status 404 returned error can't find the container with id a52c7346de93add1d237d99f0d1a7027e99e77d0afd84eceb9bcc49809bf923e Mar 13 12:37:40.179269 master-0 kubenswrapper[3989]: I0313 12:37:40.177450 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-ztpxf"] Mar 13 12:37:40.179269 master-0 kubenswrapper[3989]: I0313 12:37:40.177979 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:40.179269 master-0 kubenswrapper[3989]: E0313 12:37:40.178104 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:37:40.313232 master-0 kubenswrapper[3989]: I0313 12:37:40.313124 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vp6bn\" (UniqueName: \"kubernetes.io/projected/59c9773d-7e88-4e30-9b8a-792a869a860e-kube-api-access-vp6bn\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:40.313232 master-0 kubenswrapper[3989]: I0313 12:37:40.313213 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:40.413666 master-0 kubenswrapper[3989]: I0313 12:37:40.413591 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp6bn\" (UniqueName: \"kubernetes.io/projected/59c9773d-7e88-4e30-9b8a-792a869a860e-kube-api-access-vp6bn\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:40.413666 master-0 kubenswrapper[3989]: I0313 12:37:40.413675 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:40.414035 master-0 kubenswrapper[3989]: E0313 12:37:40.413833 3989 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not 
registered Mar 13 12:37:40.414035 master-0 kubenswrapper[3989]: E0313 12:37:40.413885 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:37:40.913870861 +0000 UTC m=+65.032338498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:37:40.431478 master-0 kubenswrapper[3989]: I0313 12:37:40.431331 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp6bn\" (UniqueName: \"kubernetes.io/projected/59c9773d-7e88-4e30-9b8a-792a869a860e-kube-api-access-vp6bn\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:40.682858 master-0 kubenswrapper[3989]: I0313 12:37:40.682731 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" event={"ID":"6d1a0616-4479-4621-b042-36a586bd8248","Type":"ContainerStarted","Data":"a52c7346de93add1d237d99f0d1a7027e99e77d0afd84eceb9bcc49809bf923e"} Mar 13 12:37:40.917203 master-0 kubenswrapper[3989]: I0313 12:37:40.917149 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:40.917434 master-0 kubenswrapper[3989]: E0313 12:37:40.917312 3989 secret.go:189] Couldn't get secret 
openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:37:40.917434 master-0 kubenswrapper[3989]: E0313 12:37:40.917383 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:37:41.917365396 +0000 UTC m=+66.035833033 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:37:41.925410 master-0 kubenswrapper[3989]: I0313 12:37:41.925258 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:41.926142 master-0 kubenswrapper[3989]: E0313 12:37:41.925424 3989 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:37:41.926142 master-0 kubenswrapper[3989]: E0313 12:37:41.925488 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:37:43.925472901 +0000 UTC m=+68.043940538 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:37:41.938381 master-0 kubenswrapper[3989]: I0313 12:37:41.938312 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:41.938623 master-0 kubenswrapper[3989]: E0313 12:37:41.938462 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:37:43.824907 master-0 kubenswrapper[3989]: I0313 12:37:43.691936 3989 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="18a2b972b6d690603207972c9280fdef39401c1fb14724697481249e3cdd3fe3" exitCode=0 Mar 13 12:37:43.824907 master-0 kubenswrapper[3989]: I0313 12:37:43.692028 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" event={"ID":"6d1a0616-4479-4621-b042-36a586bd8248","Type":"ContainerDied","Data":"18a2b972b6d690603207972c9280fdef39401c1fb14724697481249e3cdd3fe3"} Mar 13 12:37:43.926077 master-0 kubenswrapper[3989]: I0313 12:37:43.925981 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 
12:37:43.926528 master-0 kubenswrapper[3989]: E0313 12:37:43.926249 3989 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:37:43.926528 master-0 kubenswrapper[3989]: E0313 12:37:43.926329 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:37:47.92630947 +0000 UTC m=+72.044777107 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:37:43.937376 master-0 kubenswrapper[3989]: I0313 12:37:43.937305 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:43.937687 master-0 kubenswrapper[3989]: E0313 12:37:43.937478 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:37:45.937611 master-0 kubenswrapper[3989]: I0313 12:37:45.937532 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:45.938406 master-0 kubenswrapper[3989]: E0313 12:37:45.937735 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:37:47.938214 master-0 kubenswrapper[3989]: I0313 12:37:47.938032 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:47.943761 master-0 kubenswrapper[3989]: E0313 12:37:47.938328 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:37:47.953815 master-0 kubenswrapper[3989]: I0313 12:37:47.953735 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 13 12:37:47.990435 master-0 kubenswrapper[3989]: I0313 12:37:47.989631 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:47.990435 master-0 kubenswrapper[3989]: E0313 12:37:47.989862 3989 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:37:47.990435 master-0 kubenswrapper[3989]: E0313 12:37:47.990023 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:37:55.989953749 +0000 UTC m=+80.108421386 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:37:48.122330 master-0 kubenswrapper[3989]: W0313 12:37:48.122220 3989 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 13 12:37:49.938245 master-0 kubenswrapper[3989]: I0313 12:37:49.938016 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:49.938245 master-0 kubenswrapper[3989]: E0313 12:37:49.938509 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:37:50.410023 master-0 kubenswrapper[3989]: I0313 12:37:50.409948 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:37:50.410267 master-0 kubenswrapper[3989]: E0313 12:37:50.410126 3989 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:37:50.410267 master-0 kubenswrapper[3989]: E0313 12:37:50.410197 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:22.410178676 +0000 UTC m=+106.528646313 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:37:51.588118 master-0 kubenswrapper[3989]: I0313 12:37:51.587538 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf"] Mar 13 12:37:51.594014 master-0 kubenswrapper[3989]: I0313 12:37:51.593540 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.595892 master-0 kubenswrapper[3989]: I0313 12:37:51.595838 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 12:37:51.597062 master-0 kubenswrapper[3989]: I0313 12:37:51.597038 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 12:37:51.597468 master-0 kubenswrapper[3989]: I0313 12:37:51.597448 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 12:37:51.597706 master-0 kubenswrapper[3989]: I0313 12:37:51.597675 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 12:37:51.598029 master-0 kubenswrapper[3989]: I0313 12:37:51.597998 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 12:37:51.623605 master-0 kubenswrapper[3989]: I0313 12:37:51.623363 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=4.623344964 podStartE2EDuration="4.623344964s" podCreationTimestamp="2026-03-13 12:37:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:37:51.622460877 +0000 UTC m=+75.740928514" watchObservedRunningTime="2026-03-13 12:37:51.623344964 +0000 UTC m=+75.741812621" Mar 13 12:37:51.770159 master-0 kubenswrapper[3989]: I0313 12:37:51.770070 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.770443 master-0 kubenswrapper[3989]: I0313 12:37:51.770199 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.770443 master-0 kubenswrapper[3989]: I0313 12:37:51.770228 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x27d2\" (UniqueName: \"kubernetes.io/projected/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-kube-api-access-x27d2\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.770443 master-0 kubenswrapper[3989]: I0313 12:37:51.770258 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.799309 master-0 kubenswrapper[3989]: I0313 12:37:51.799138 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vwczt"] Mar 13 12:37:51.800352 master-0 kubenswrapper[3989]: I0313 12:37:51.800237 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.804535 master-0 kubenswrapper[3989]: I0313 12:37:51.804397 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 12:37:51.804756 master-0 kubenswrapper[3989]: I0313 12:37:51.804673 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 12:37:51.870711 master-0 kubenswrapper[3989]: I0313 12:37:51.870492 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.870711 master-0 kubenswrapper[3989]: I0313 12:37:51.870698 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.870947 master-0 kubenswrapper[3989]: I0313 12:37:51.870720 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x27d2\" (UniqueName: \"kubernetes.io/projected/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-kube-api-access-x27d2\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.870947 master-0 kubenswrapper[3989]: I0313 12:37:51.870743 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.871498 master-0 kubenswrapper[3989]: I0313 12:37:51.871458 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.871930 master-0 kubenswrapper[3989]: I0313 12:37:51.871726 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.883501 master-0 kubenswrapper[3989]: I0313 12:37:51.883451 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.889180 master-0 kubenswrapper[3989]: I0313 12:37:51.888762 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x27d2\" (UniqueName: \"kubernetes.io/projected/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-kube-api-access-x27d2\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 
12:37:51.937987 master-0 kubenswrapper[3989]: I0313 12:37:51.937909 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:37:51.938276 master-0 kubenswrapper[3989]: E0313 12:37:51.938097 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:37:51.958027 master-0 kubenswrapper[3989]: I0313 12:37:51.957962 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:37:51.971724 master-0 kubenswrapper[3989]: I0313 12:37:51.971686 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-ovn\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.971724 master-0 kubenswrapper[3989]: I0313 12:37:51.971725 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-slash\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.971912 master-0 kubenswrapper[3989]: I0313 12:37:51.971752 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-openvswitch\") pod 
\"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.971912 master-0 kubenswrapper[3989]: I0313 12:37:51.971831 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.971912 master-0 kubenswrapper[3989]: I0313 12:37:51.971884 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-log-socket\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972009 master-0 kubenswrapper[3989]: I0313 12:37:51.971931 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972009 master-0 kubenswrapper[3989]: I0313 12:37:51.971962 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovn-node-metrics-cert\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972009 master-0 kubenswrapper[3989]: I0313 12:37:51.971986 3989 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-kubelet\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972144 master-0 kubenswrapper[3989]: I0313 12:37:51.972016 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-env-overrides\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972144 master-0 kubenswrapper[3989]: I0313 12:37:51.972056 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-78kjl\" (UniqueName: \"kubernetes.io/projected/03e4c9a0-202f-4cdd-905c-2913d9490e22-kube-api-access-78kjl\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972144 master-0 kubenswrapper[3989]: I0313 12:37:51.972074 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-systemd-units\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972144 master-0 kubenswrapper[3989]: I0313 12:37:51.972091 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-node-log\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972307 master-0 
kubenswrapper[3989]: I0313 12:37:51.972143 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-etc-openvswitch\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972307 master-0 kubenswrapper[3989]: I0313 12:37:51.972177 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-bin\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972307 master-0 kubenswrapper[3989]: I0313 12:37:51.972195 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-config\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972307 master-0 kubenswrapper[3989]: I0313 12:37:51.972239 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-systemd\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972307 master-0 kubenswrapper[3989]: I0313 12:37:51.972270 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-var-lib-openvswitch\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972307 master-0 kubenswrapper[3989]: I0313 12:37:51.972295 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-script-lib\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972520 master-0 kubenswrapper[3989]: I0313 12:37:51.972318 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-netns\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:51.972520 master-0 kubenswrapper[3989]: I0313 12:37:51.972341 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-netd\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073045 master-0 kubenswrapper[3989]: I0313 12:37:52.072991 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073302 master-0 kubenswrapper[3989]: I0313 12:37:52.073085 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-log-socket\") pod 
\"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073302 master-0 kubenswrapper[3989]: I0313 12:37:52.073119 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073406 master-0 kubenswrapper[3989]: I0313 12:37:52.073126 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-ovn-kubernetes\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073406 master-0 kubenswrapper[3989]: I0313 12:37:52.073163 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-log-socket\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073406 master-0 kubenswrapper[3989]: I0313 12:37:52.073201 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073406 master-0 kubenswrapper[3989]: I0313 12:37:52.073315 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovn-node-metrics-cert\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073406 master-0 kubenswrapper[3989]: I0313 12:37:52.073402 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-kubelet\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073676 master-0 kubenswrapper[3989]: I0313 12:37:52.073645 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-kubelet\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073781 master-0 kubenswrapper[3989]: I0313 12:37:52.073655 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-env-overrides\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.073906 master-0 kubenswrapper[3989]: I0313 12:37:52.073876 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-78kjl\" (UniqueName: \"kubernetes.io/projected/03e4c9a0-202f-4cdd-905c-2913d9490e22-kube-api-access-78kjl\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.074001 master-0 kubenswrapper[3989]: I0313 12:37:52.073977 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: 
\"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-systemd-units\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.074160 master-0 kubenswrapper[3989]: I0313 12:37:52.074110 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-systemd-units\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.075030 master-0 kubenswrapper[3989]: I0313 12:37:52.074021 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-node-log\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.075030 master-0 kubenswrapper[3989]: I0313 12:37:52.074535 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-config\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.075030 master-0 kubenswrapper[3989]: I0313 12:37:52.074703 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-etc-openvswitch\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.075030 master-0 kubenswrapper[3989]: I0313 12:37:52.074730 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-bin\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.075030 master-0 kubenswrapper[3989]: I0313 12:37:52.074748 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-var-lib-openvswitch\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.075030 master-0 kubenswrapper[3989]: I0313 12:37:52.074820 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-systemd\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.075030 master-0 kubenswrapper[3989]: I0313 12:37:52.074913 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-netns\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.075030 master-0 kubenswrapper[3989]: I0313 12:37:52.074964 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-netd\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.075030 master-0 kubenswrapper[3989]: I0313 12:37:52.074991 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-script-lib\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.075030 master-0 kubenswrapper[3989]: I0313 12:37:52.075063 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-ovn\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076083 master-0 kubenswrapper[3989]: I0313 12:37:52.075143 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-slash\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076083 master-0 kubenswrapper[3989]: I0313 12:37:52.075188 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-openvswitch\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076083 master-0 kubenswrapper[3989]: I0313 12:37:52.075283 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-openvswitch\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076083 master-0 kubenswrapper[3989]: I0313 12:37:52.074282 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-node-log\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076083 master-0 kubenswrapper[3989]: I0313 12:37:52.075599 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-netns\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076083 master-0 kubenswrapper[3989]: I0313 12:37:52.075606 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-netd\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076083 master-0 kubenswrapper[3989]: I0313 12:37:52.075657 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-slash\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076083 master-0 kubenswrapper[3989]: I0313 12:37:52.075694 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-etc-openvswitch\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076083 master-0 kubenswrapper[3989]: I0313 12:37:52.075720 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-systemd\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076083 master-0 kubenswrapper[3989]: I0313 12:37:52.075757 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-ovn\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076608 master-0 kubenswrapper[3989]: I0313 12:37:52.076402 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-config\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076608 master-0 kubenswrapper[3989]: I0313 12:37:52.076460 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-bin\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076608 master-0 kubenswrapper[3989]: I0313 12:37:52.076483 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-var-lib-openvswitch\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.076608 master-0 kubenswrapper[3989]: I0313 12:37:52.076516 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-script-lib\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.077328 master-0 kubenswrapper[3989]: I0313 12:37:52.076610 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-env-overrides\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.078203 master-0 kubenswrapper[3989]: I0313 12:37:52.078163 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovn-node-metrics-cert\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.094747 master-0 kubenswrapper[3989]: I0313 12:37:52.094668 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-78kjl\" (UniqueName: \"kubernetes.io/projected/03e4c9a0-202f-4cdd-905c-2913d9490e22-kube-api-access-78kjl\") pod \"ovnkube-node-vwczt\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:37:52.118589 master-0 kubenswrapper[3989]: I0313 12:37:52.118450 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vwczt"
Mar 13 12:37:53.767824 master-0 kubenswrapper[3989]: I0313 12:37:53.767734 3989 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="9caf396b8c5078621fb7d9a89a4bf5d4e00c4dccbb5c00252204a9ac1a3b5d3b" exitCode=0
Mar 13 12:37:53.767824 master-0 kubenswrapper[3989]: I0313 12:37:53.767820 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" event={"ID":"6d1a0616-4479-4621-b042-36a586bd8248","Type":"ContainerDied","Data":"9caf396b8c5078621fb7d9a89a4bf5d4e00c4dccbb5c00252204a9ac1a3b5d3b"}
Mar 13 12:37:53.772470 master-0 kubenswrapper[3989]: I0313 12:37:53.772379 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" event={"ID":"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0","Type":"ContainerStarted","Data":"c871b2dccdf32a31560c07f43adc4a4331aa74aa3287e97222705a0080bb9f2a"}
Mar 13 12:37:53.772470 master-0 kubenswrapper[3989]: I0313 12:37:53.772465 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" event={"ID":"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0","Type":"ContainerStarted","Data":"6970059f480dc091ae05c0c7c9205d04df86a1f3452392a79024b011c7f566dc"}
Mar 13 12:37:53.773812 master-0 kubenswrapper[3989]: I0313 12:37:53.773764 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" event={"ID":"03e4c9a0-202f-4cdd-905c-2913d9490e22","Type":"ContainerStarted","Data":"a60763a7af9aa97d9952f2c28850d94ff02a0e2c2425e99313bff4a66fc9e4da"}
Mar 13 12:37:53.937603 master-0 kubenswrapper[3989]: I0313 12:37:53.937525 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:37:53.937825 master-0 kubenswrapper[3989]: E0313 12:37:53.937679 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:37:54.777697 master-0 kubenswrapper[3989]: I0313 12:37:54.777645 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-jjmb8"]
Mar 13 12:37:54.778512 master-0 kubenswrapper[3989]: I0313 12:37:54.778291 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:37:54.778512 master-0 kubenswrapper[3989]: E0313 12:37:54.778356 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442"
Mar 13 12:37:54.916720 master-0 kubenswrapper[3989]: I0313 12:37:54.916643 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:37:55.017265 master-0 kubenswrapper[3989]: I0313 12:37:55.017184 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:37:55.083547 master-0 kubenswrapper[3989]: E0313 12:37:55.083405 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 12:37:55.083547 master-0 kubenswrapper[3989]: E0313 12:37:55.083472 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 12:37:55.083547 master-0 kubenswrapper[3989]: E0313 12:37:55.083496 3989 projected.go:194] Error preparing data for projected volume kube-api-access-bk8kt for pod openshift-network-diagnostics/network-check-target-jjmb8: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:37:55.083839 master-0 kubenswrapper[3989]: E0313 12:37:55.083624 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt podName:70c8b79e-4d29-4ae2-a24f-68595d942442 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:55.583595366 +0000 UTC m=+79.702063003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-bk8kt" (UniqueName: "kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt") pod "network-check-target-jjmb8" (UID: "70c8b79e-4d29-4ae2-a24f-68595d942442") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:37:55.625410 master-0 kubenswrapper[3989]: I0313 12:37:55.625338 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:37:55.625700 master-0 kubenswrapper[3989]: E0313 12:37:55.625509 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 12:37:55.625700 master-0 kubenswrapper[3989]: E0313 12:37:55.625532 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 12:37:55.625700 master-0 kubenswrapper[3989]: E0313 12:37:55.625550 3989 projected.go:194] Error preparing data for projected volume kube-api-access-bk8kt for pod openshift-network-diagnostics/network-check-target-jjmb8: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:37:55.625700 master-0 kubenswrapper[3989]: E0313 12:37:55.625637 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt podName:70c8b79e-4d29-4ae2-a24f-68595d942442 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:56.625619821 +0000 UTC m=+80.744087458 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-bk8kt" (UniqueName: "kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt") pod "network-check-target-jjmb8" (UID: "70c8b79e-4d29-4ae2-a24f-68595d942442") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:37:55.937697 master-0 kubenswrapper[3989]: I0313 12:37:55.937611 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:37:55.938377 master-0 kubenswrapper[3989]: E0313 12:37:55.937762 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442"
Mar 13 12:37:55.938377 master-0 kubenswrapper[3989]: I0313 12:37:55.937834 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:37:55.938377 master-0 kubenswrapper[3989]: E0313 12:37:55.937884 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:37:56.028861 master-0 kubenswrapper[3989]: I0313 12:37:56.028790 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:37:56.029095 master-0 kubenswrapper[3989]: E0313 12:37:56.028950 3989 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 12:37:56.029095 master-0 kubenswrapper[3989]: E0313 12:37:56.029006 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:38:12.028990259 +0000 UTC m=+96.147457896 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 12:37:56.654480 master-0 kubenswrapper[3989]: I0313 12:37:56.654397 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:37:56.654789 master-0 kubenswrapper[3989]: E0313 12:37:56.654640 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 12:37:56.654789 master-0 kubenswrapper[3989]: E0313 12:37:56.654683 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 12:37:56.654789 master-0 kubenswrapper[3989]: E0313 12:37:56.654696 3989 projected.go:194] Error preparing data for projected volume kube-api-access-bk8kt for pod openshift-network-diagnostics/network-check-target-jjmb8: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:37:56.654789 master-0 kubenswrapper[3989]: E0313 12:37:56.654769 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt podName:70c8b79e-4d29-4ae2-a24f-68595d942442 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.654751461 +0000 UTC m=+82.773219098 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-bk8kt" (UniqueName: "kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt") pod "network-check-target-jjmb8" (UID: "70c8b79e-4d29-4ae2-a24f-68595d942442") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:37:57.937818 master-0 kubenswrapper[3989]: I0313 12:37:57.937739 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:37:57.937818 master-0 kubenswrapper[3989]: I0313 12:37:57.937786 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:37:57.938568 master-0 kubenswrapper[3989]: E0313 12:37:57.937957 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:37:57.938568 master-0 kubenswrapper[3989]: E0313 12:37:57.938074 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442"
Mar 13 12:37:58.032809 master-0 kubenswrapper[3989]: I0313 12:37:58.031290 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-kb5r7"]
Mar 13 12:37:58.032809 master-0 kubenswrapper[3989]: I0313 12:37:58.031760 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.040634 master-0 kubenswrapper[3989]: I0313 12:37:58.036812 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 13 12:37:58.040634 master-0 kubenswrapper[3989]: I0313 12:37:58.038173 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 13 12:37:58.040634 master-0 kubenswrapper[3989]: I0313 12:37:58.038281 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 13 12:37:58.041866 master-0 kubenswrapper[3989]: I0313 12:37:58.041445 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 13 12:37:58.041866 master-0 kubenswrapper[3989]: I0313 12:37:58.041642 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 13 12:37:58.167507 master-0 kubenswrapper[3989]: I0313 12:37:58.167444 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-ovnkube-identity-cm\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.169427 master-0 kubenswrapper[3989]: I0313 12:37:58.168173 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-env-overrides\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.169427 master-0 kubenswrapper[3989]: I0313 12:37:58.168265 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.169427 master-0 kubenswrapper[3989]: I0313 12:37:58.168306 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qg7nx\" (UniqueName: \"kubernetes.io/projected/cf580693-2931-4fef-adb5-b396f7303352-kube-api-access-qg7nx\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.269378 master-0 kubenswrapper[3989]: I0313 12:37:58.269225 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.269378 master-0 kubenswrapper[3989]: I0313 12:37:58.269301 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg7nx\" (UniqueName: \"kubernetes.io/projected/cf580693-2931-4fef-adb5-b396f7303352-kube-api-access-qg7nx\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.269378 master-0 kubenswrapper[3989]: I0313 12:37:58.269326 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-ovnkube-identity-cm\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.269378 master-0 kubenswrapper[3989]: I0313 12:37:58.269343 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-env-overrides\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.270040 master-0 kubenswrapper[3989]: E0313 12:37:58.270020 3989 secret.go:189] Couldn't get secret openshift-network-node-identity/network-node-identity-cert: secret "network-node-identity-cert" not found
Mar 13 12:37:58.270214 master-0 kubenswrapper[3989]: E0313 12:37:58.270077 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert podName:cf580693-2931-4fef-adb5-b396f7303352 nodeName:}" failed. No retries permitted until 2026-03-13 12:37:58.770053419 +0000 UTC m=+82.888521056 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert") pod "network-node-identity-kb5r7" (UID: "cf580693-2931-4fef-adb5-b396f7303352") : secret "network-node-identity-cert" not found
Mar 13 12:37:58.270959 master-0 kubenswrapper[3989]: I0313 12:37:58.270937 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-env-overrides\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.271752 master-0 kubenswrapper[3989]: I0313 12:37:58.271729 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-ovnkube-identity-cm\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.288963 master-0 kubenswrapper[3989]: I0313 12:37:58.288826 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg7nx\" (UniqueName: \"kubernetes.io/projected/cf580693-2931-4fef-adb5-b396f7303352-kube-api-access-qg7nx\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.675418 master-0 kubenswrapper[3989]: I0313 12:37:58.675353 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:37:58.675754 master-0 kubenswrapper[3989]: E0313 12:37:58.675569 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 12:37:58.675754 master-0 kubenswrapper[3989]: E0313 12:37:58.675613 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 12:37:58.675754 master-0 kubenswrapper[3989]: E0313 12:37:58.675634 3989 projected.go:194] Error preparing data for projected volume kube-api-access-bk8kt for pod openshift-network-diagnostics/network-check-target-jjmb8: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:37:58.675754 master-0 kubenswrapper[3989]: E0313 12:37:58.675722 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt podName:70c8b79e-4d29-4ae2-a24f-68595d942442 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:02.675704857 +0000 UTC m=+86.794172494 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-bk8kt" (UniqueName: "kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt") pod "network-check-target-jjmb8" (UID: "70c8b79e-4d29-4ae2-a24f-68595d942442") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:37:58.776596 master-0 kubenswrapper[3989]: I0313 12:37:58.776486 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.782614 master-0 kubenswrapper[3989]: I0313 12:37:58.782556 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.893303 master-0 kubenswrapper[3989]: I0313 12:37:58.893243 3989 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="68ab991a1ca1a43041140e5538bac0164a9cb6cf676c5102e75b42f612a72d9d" exitCode=0
Mar 13 12:37:58.893592 master-0 kubenswrapper[3989]: I0313 12:37:58.893351 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" event={"ID":"6d1a0616-4479-4621-b042-36a586bd8248","Type":"ContainerDied","Data":"68ab991a1ca1a43041140e5538bac0164a9cb6cf676c5102e75b42f612a72d9d"}
Mar 13 12:37:58.898950 master-0 kubenswrapper[3989]: I0313 12:37:58.898296 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-6c7r9" event={"ID":"ffcc3a23-d81c-4064-a24a-857dbe3222c8","Type":"ContainerStarted","Data":"28d8b25db2731335cd3f24ed2379cb78c8359655dce9992e8ce5b272cfe285d7"}
Mar 13 12:37:58.953136 master-0 kubenswrapper[3989]: I0313 12:37:58.953078 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:37:58.970177 master-0 kubenswrapper[3989]: W0313 12:37:58.970113 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf580693_2931_4fef_adb5_b396f7303352.slice/crio-31d28339b74a0d08ca9d705b4d13c84a3aaf85f1383fa6b578b10c51b3fe36e2 WatchSource:0}: Error finding container 31d28339b74a0d08ca9d705b4d13c84a3aaf85f1383fa6b578b10c51b3fe36e2: Status 404 returned error can't find the container with id 31d28339b74a0d08ca9d705b4d13c84a3aaf85f1383fa6b578b10c51b3fe36e2
Mar 13 12:37:59.902968 master-0 kubenswrapper[3989]: I0313 12:37:59.902887 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kb5r7" event={"ID":"cf580693-2931-4fef-adb5-b396f7303352","Type":"ContainerStarted","Data":"31d28339b74a0d08ca9d705b4d13c84a3aaf85f1383fa6b578b10c51b3fe36e2"}
Mar 13 12:37:59.937450 master-0 kubenswrapper[3989]: I0313 12:37:59.937381 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:37:59.937708 master-0 kubenswrapper[3989]: I0313 12:37:59.937453 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:37:59.937708 master-0 kubenswrapper[3989]: E0313 12:37:59.937527 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442"
Mar 13 12:37:59.937960 master-0 kubenswrapper[3989]: E0313 12:37:59.937785 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:38:01.914158 master-0 kubenswrapper[3989]: I0313 12:38:01.913249 3989 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="1b6e1b00449d4ad0069d761f09fd31eb925ff8c4773bf223a962c96f72589083" exitCode=0
Mar 13 12:38:01.914158 master-0 kubenswrapper[3989]: I0313 12:38:01.913441 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" event={"ID":"6d1a0616-4479-4621-b042-36a586bd8248","Type":"ContainerDied","Data":"1b6e1b00449d4ad0069d761f09fd31eb925ff8c4773bf223a962c96f72589083"}
Mar 13 12:38:01.937773 master-0 kubenswrapper[3989]: I0313 12:38:01.937726 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:01.938026 master-0 kubenswrapper[3989]: I0313 12:38:01.937733 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:01.938026 master-0 kubenswrapper[3989]: E0313 12:38:01.937910 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:38:01.938179 master-0 kubenswrapper[3989]: E0313 12:38:01.938033 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442"
Mar 13 12:38:01.943863 master-0 kubenswrapper[3989]: I0313 12:38:01.943749 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-6c7r9" podStartSLOduration=4.532572957 podStartE2EDuration="22.943720148s" podCreationTimestamp="2026-03-13 12:37:39 +0000 UTC" firstStartedPulling="2026-03-13 12:37:39.533382975 +0000 UTC m=+63.651850612" lastFinishedPulling="2026-03-13 12:37:57.944530166 +0000 UTC m=+82.062997803" observedRunningTime="2026-03-13 12:37:58.939170209 +0000 UTC m=+83.057637846" watchObservedRunningTime="2026-03-13 12:38:01.943720148 +0000 UTC m=+86.062187785"
Mar 13 12:38:02.690783 master-0 kubenswrapper[3989]: I0313 12:38:02.690698 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:02.691054 master-0 kubenswrapper[3989]: E0313 12:38:02.690964 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 12:38:02.691054 master-0 kubenswrapper[3989]: E0313 12:38:02.691003 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 12:38:02.691054 master-0 kubenswrapper[3989]: E0313 12:38:02.691018 3989 projected.go:194] Error preparing data for projected volume kube-api-access-bk8kt for pod openshift-network-diagnostics/network-check-target-jjmb8: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:38:02.691222 master-0 kubenswrapper[3989]: E0313 12:38:02.691109 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt podName:70c8b79e-4d29-4ae2-a24f-68595d942442 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:10.691085376 +0000 UTC m=+94.809553083 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-bk8kt" (UniqueName: "kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt") pod "network-check-target-jjmb8" (UID: "70c8b79e-4d29-4ae2-a24f-68595d942442") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:38:03.937716 master-0 kubenswrapper[3989]: I0313 12:38:03.937438 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:03.937716 master-0 kubenswrapper[3989]: I0313 12:38:03.937444 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:03.937716 master-0 kubenswrapper[3989]: E0313 12:38:03.937702 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442"
Mar 13 12:38:03.939216 master-0 kubenswrapper[3989]: E0313 12:38:03.937868 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:38:05.111371 master-0 kubenswrapper[3989]: I0313 12:38:05.111159 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:05.111371 master-0 kubenswrapper[3989]: I0313 12:38:05.111216 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:05.112215 master-0 kubenswrapper[3989]: E0313 12:38:05.111390 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:38:05.112215 master-0 kubenswrapper[3989]: E0313 12:38:05.111499 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442"
Mar 13 12:38:06.939143 master-0 kubenswrapper[3989]: I0313 12:38:06.937810 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:06.939143 master-0 kubenswrapper[3989]: I0313 12:38:06.937874 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:06.939143 master-0 kubenswrapper[3989]: E0313 12:38:06.938637 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:38:06.939143 master-0 kubenswrapper[3989]: E0313 12:38:06.938776 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442"
Mar 13 12:38:08.952626 master-0 kubenswrapper[3989]: I0313 12:38:08.946000 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:08.952626 master-0 kubenswrapper[3989]: E0313 12:38:08.946273 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:38:08.952626 master-0 kubenswrapper[3989]: I0313 12:38:08.947115 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:08.952626 master-0 kubenswrapper[3989]: E0313 12:38:08.947218 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442"
Mar 13 12:38:10.703333 master-0 kubenswrapper[3989]: I0313 12:38:10.703232 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:10.704412 master-0 kubenswrapper[3989]: E0313 12:38:10.703614 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 12:38:10.704412 master-0 kubenswrapper[3989]: E0313 12:38:10.703649 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 12:38:10.704412 master-0 kubenswrapper[3989]: E0313 12:38:10.703663 3989 projected.go:194] Error preparing data for projected volume kube-api-access-bk8kt for pod openshift-network-diagnostics/network-check-target-jjmb8: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 12:38:10.704412 master-0 kubenswrapper[3989]: E0313 12:38:10.703770 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt podName:70c8b79e-4d29-4ae2-a24f-68595d942442 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:26.703735688 +0000 UTC m=+110.822203325 (durationBeforeRetry 16s).
Error: MountVolume.SetUp failed for volume "kube-api-access-bk8kt" (UniqueName: "kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt") pod "network-check-target-jjmb8" (UID: "70c8b79e-4d29-4ae2-a24f-68595d942442") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 12:38:11.131693 master-0 kubenswrapper[3989]: I0313 12:38:11.131050 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:11.131693 master-0 kubenswrapper[3989]: E0313 12:38:11.131228 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:11.131693 master-0 kubenswrapper[3989]: I0313 12:38:11.131401 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:11.131693 master-0 kubenswrapper[3989]: E0313 12:38:11.131531 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:11.964242 master-0 kubenswrapper[3989]: I0313 12:38:11.964093 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 13 12:38:11.965480 master-0 kubenswrapper[3989]: I0313 12:38:11.964830 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 12:38:12.044513 master-0 kubenswrapper[3989]: I0313 12:38:12.044263 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:12.045830 master-0 kubenswrapper[3989]: E0313 12:38:12.044683 3989 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:38:12.045830 master-0 kubenswrapper[3989]: E0313 12:38:12.044859 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:38:44.044802565 +0000 UTC m=+128.163270262 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 12:38:12.938018 master-0 kubenswrapper[3989]: I0313 12:38:12.937944 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:12.938018 master-0 kubenswrapper[3989]: I0313 12:38:12.937997 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:12.938424 master-0 kubenswrapper[3989]: E0313 12:38:12.938105 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:12.938424 master-0 kubenswrapper[3989]: E0313 12:38:12.938245 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:14.938081 master-0 kubenswrapper[3989]: I0313 12:38:14.938012 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:14.938812 master-0 kubenswrapper[3989]: I0313 12:38:14.938128 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:14.938812 master-0 kubenswrapper[3989]: E0313 12:38:14.938148 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:14.938812 master-0 kubenswrapper[3989]: E0313 12:38:14.938287 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:16.938256 master-0 kubenswrapper[3989]: I0313 12:38:16.938173 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:16.938844 master-0 kubenswrapper[3989]: I0313 12:38:16.938256 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:16.939359 master-0 kubenswrapper[3989]: E0313 12:38:16.939091 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:16.939359 master-0 kubenswrapper[3989]: E0313 12:38:16.939270 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:17.034567 master-0 kubenswrapper[3989]: I0313 12:38:17.034247 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=6.034202095 podStartE2EDuration="6.034202095s" podCreationTimestamp="2026-03-13 12:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:16.989872417 +0000 UTC m=+101.108340054" watchObservedRunningTime="2026-03-13 12:38:17.034202095 +0000 UTC m=+101.152669732" Mar 13 12:38:17.034567 master-0 kubenswrapper[3989]: I0313 12:38:17.034442 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=6.034434502 podStartE2EDuration="6.034434502s" podCreationTimestamp="2026-03-13 12:38:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:17.033486703 +0000 UTC m=+101.151954350" watchObservedRunningTime="2026-03-13 12:38:17.034434502 +0000 UTC m=+101.152902139" Mar 13 12:38:18.405125 master-0 kubenswrapper[3989]: I0313 12:38:18.405070 3989 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vwczt"] Mar 13 12:38:18.937560 master-0 
kubenswrapper[3989]: I0313 12:38:18.937240 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:18.937560 master-0 kubenswrapper[3989]: I0313 12:38:18.937350 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:18.937889 master-0 kubenswrapper[3989]: E0313 12:38:18.937360 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:18.938032 master-0 kubenswrapper[3989]: E0313 12:38:18.937512 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:20.953703 master-0 kubenswrapper[3989]: I0313 12:38:20.953353 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:20.953703 master-0 kubenswrapper[3989]: I0313 12:38:20.953373 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:20.953703 master-0 kubenswrapper[3989]: E0313 12:38:20.953651 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:20.953703 master-0 kubenswrapper[3989]: E0313 12:38:20.953743 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:22.234730 master-0 kubenswrapper[3989]: I0313 12:38:22.234659 3989 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="261cbab4cc990a283086b5578b976b53ce06514cd8246e1d92485867a0760ce8" exitCode=0 Mar 13 12:38:22.234730 master-0 kubenswrapper[3989]: I0313 12:38:22.234757 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" event={"ID":"6d1a0616-4479-4621-b042-36a586bd8248","Type":"ContainerDied","Data":"261cbab4cc990a283086b5578b976b53ce06514cd8246e1d92485867a0760ce8"} Mar 13 12:38:22.236991 master-0 kubenswrapper[3989]: I0313 12:38:22.236943 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" event={"ID":"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0","Type":"ContainerStarted","Data":"8c953c8136772ca565e28cae4ca94f4cbf7b11aff2c6a974b20aeadfaf72a3c5"} Mar 13 
12:38:22.238544 master-0 kubenswrapper[3989]: I0313 12:38:22.238490 3989 generic.go:334] "Generic (PLEG): container finished" podID="03e4c9a0-202f-4cdd-905c-2913d9490e22" containerID="7e5bba02e2dca208a3af3af8eb2d7b64b9be8439e116c84112074beabd4f0f01" exitCode=0 Mar 13 12:38:22.238659 master-0 kubenswrapper[3989]: I0313 12:38:22.238636 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" event={"ID":"03e4c9a0-202f-4cdd-905c-2913d9490e22","Type":"ContainerDied","Data":"7e5bba02e2dca208a3af3af8eb2d7b64b9be8439e116c84112074beabd4f0f01"} Mar 13 12:38:22.242807 master-0 kubenswrapper[3989]: I0313 12:38:22.242775 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kb5r7" event={"ID":"cf580693-2931-4fef-adb5-b396f7303352","Type":"ContainerStarted","Data":"bab02b7b0881c5a887bb7f5e343fcd3261971bd3b26625df2ad95a1d14f0e4fa"} Mar 13 12:38:22.242863 master-0 kubenswrapper[3989]: I0313 12:38:22.242817 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kb5r7" event={"ID":"cf580693-2931-4fef-adb5-b396f7303352","Type":"ContainerStarted","Data":"f53fa4951ac46c6b658ea43eb63bbf5f196ce8eab68ef66380de5ea66f33f3c5"} Mar 13 12:38:22.249673 master-0 kubenswrapper[3989]: I0313 12:38:22.249619 3989 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" Mar 13 12:38:22.370887 master-0 kubenswrapper[3989]: I0313 12:38:22.370780 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" podStartSLOduration=3.145182876 podStartE2EDuration="31.370754951s" podCreationTimestamp="2026-03-13 12:37:51 +0000 UTC" firstStartedPulling="2026-03-13 12:37:53.461931833 +0000 UTC m=+77.580399470" lastFinishedPulling="2026-03-13 12:38:21.687503918 +0000 UTC m=+105.805971545" observedRunningTime="2026-03-13 12:38:22.369604135 +0000 UTC m=+106.488071812" watchObservedRunningTime="2026-03-13 12:38:22.370754951 +0000 UTC m=+106.489222588" Mar 13 12:38:22.386168 master-0 kubenswrapper[3989]: I0313 12:38:22.385972 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-kb5r7" podStartSLOduration=1.849964734 podStartE2EDuration="24.385953607s" podCreationTimestamp="2026-03-13 12:37:58 +0000 UTC" firstStartedPulling="2026-03-13 12:37:58.973111799 +0000 UTC m=+83.091579436" lastFinishedPulling="2026-03-13 12:38:21.509100672 +0000 UTC m=+105.627568309" observedRunningTime="2026-03-13 12:38:22.385188153 +0000 UTC m=+106.503655800" watchObservedRunningTime="2026-03-13 12:38:22.385953607 +0000 UTC m=+106.504421244" Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444602 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-ovn\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444682 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-env-overrides\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444712 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-netns\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444735 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-78kjl\" (UniqueName: \"kubernetes.io/projected/03e4c9a0-202f-4cdd-905c-2913d9490e22-kube-api-access-78kjl\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444760 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-script-lib\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444762 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444794 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-var-lib-cni-networks-ovn-kubernetes\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444821 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-ovn-kubernetes\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444842 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-log-socket\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444874 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-openvswitch\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444903 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovn-node-metrics-cert\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.445064 master-0 
kubenswrapper[3989]: I0313 12:38:22.444880 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444960 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.444930 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.445000 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-log-socket" (OuterVolumeSpecName: "log-socket") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.445013 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:22.445064 master-0 kubenswrapper[3989]: I0313 12:38:22.445050 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-systemd-units\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445091 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-node-log\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445119 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-config\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445147 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-var-lib-openvswitch\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.446083 
master-0 kubenswrapper[3989]: I0313 12:38:22.445179 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-slash\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445196 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-node-log" (OuterVolumeSpecName: "node-log") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445202 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-kubelet\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") " Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445222 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445237 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-systemd\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") "
Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445259 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-netd\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") "
Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445284 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-bin\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") "
Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445304 3989 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-etc-openvswitch\") pod \"03e4c9a0-202f-4cdd-905c-2913d9490e22\" (UID: \"03e4c9a0-202f-4cdd-905c-2913d9490e22\") "
Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445313 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445344 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445365 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445389 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445411 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-slash" (OuterVolumeSpecName: "host-slash") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:22.446083 master-0 kubenswrapper[3989]: I0313 12:38:22.445459 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.445471 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.445479 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446046 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446224 3989 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-openvswitch\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446424 3989 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-systemd-units\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: E0313 12:38:22.446645 3989 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446665 3989 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-node-log\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: E0313 12:38:22.446711 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:26.446694647 +0000 UTC m=+170.565162284 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446710 3989 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446754 3989 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-slash\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446773 3989 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-kubelet\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446790 3989 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-systemd\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446804 3989 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-netd\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446815 3989 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-cni-bin\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446833 3989 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-etc-openvswitch\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.446868 master-0 kubenswrapper[3989]: I0313 12:38:22.446850 3989 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-run-ovn\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.447593 master-0 kubenswrapper[3989]: I0313 12:38:22.446914 3989 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-netns\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.447593 master-0 kubenswrapper[3989]: I0313 12:38:22.446931 3989 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-env-overrides\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.447593 master-0 kubenswrapper[3989]: I0313 12:38:22.446941 3989 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.447593 master-0 kubenswrapper[3989]: I0313 12:38:22.446958 3989 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.447593 master-0 kubenswrapper[3989]: I0313 12:38:22.446969 3989 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/03e4c9a0-202f-4cdd-905c-2913d9490e22-log-socket\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.448543 master-0 kubenswrapper[3989]: I0313 12:38:22.448483 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:22.448543 master-0 kubenswrapper[3989]: I0313 12:38:22.448519 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:38:22.454000 master-0 kubenswrapper[3989]: I0313 12:38:22.453948 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:38:22.454144 master-0 kubenswrapper[3989]: I0313 12:38:22.454051 3989 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e4c9a0-202f-4cdd-905c-2913d9490e22-kube-api-access-78kjl" (OuterVolumeSpecName: "kube-api-access-78kjl") pod "03e4c9a0-202f-4cdd-905c-2913d9490e22" (UID: "03e4c9a0-202f-4cdd-905c-2913d9490e22"). InnerVolumeSpecName "kube-api-access-78kjl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:38:22.547437 master-0 kubenswrapper[3989]: I0313 12:38:22.547385 3989 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.547437 master-0 kubenswrapper[3989]: I0313 12:38:22.547419 3989 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.547437 master-0 kubenswrapper[3989]: I0313 12:38:22.547429 3989 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-78kjl\" (UniqueName: \"kubernetes.io/projected/03e4c9a0-202f-4cdd-905c-2913d9490e22-kube-api-access-78kjl\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.547437 master-0 kubenswrapper[3989]: I0313 12:38:22.547438 3989 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/03e4c9a0-202f-4cdd-905c-2913d9490e22-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\""
Mar 13 12:38:22.937749 master-0 kubenswrapper[3989]: I0313 12:38:22.937643 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:22.937749 master-0 kubenswrapper[3989]: I0313 12:38:22.937708 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:22.938040 master-0 kubenswrapper[3989]: E0313 12:38:22.937856 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:38:22.938040 master-0 kubenswrapper[3989]: E0313 12:38:22.937992 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442"
Mar 13 12:38:23.248834 master-0 kubenswrapper[3989]: I0313 12:38:23.248643 3989 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="0552724532a0871797536a0fa5461171eaa5b983641df0c9e3100001409bbe97" exitCode=0
Mar 13 12:38:23.248834 master-0 kubenswrapper[3989]: I0313 12:38:23.248791 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" event={"ID":"6d1a0616-4479-4621-b042-36a586bd8248","Type":"ContainerDied","Data":"0552724532a0871797536a0fa5461171eaa5b983641df0c9e3100001409bbe97"}
Mar 13 12:38:23.251766 master-0 kubenswrapper[3989]: I0313 12:38:23.251138 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vwczt" event={"ID":"03e4c9a0-202f-4cdd-905c-2913d9490e22","Type":"ContainerDied","Data":"a60763a7af9aa97d9952f2c28850d94ff02a0e2c2425e99313bff4a66fc9e4da"}
Mar 13 12:38:23.251933 master-0 kubenswrapper[3989]: I0313 12:38:23.251891 3989 scope.go:117] "RemoveContainer" containerID="7e5bba02e2dca208a3af3af8eb2d7b64b9be8439e116c84112074beabd4f0f01"
Mar 13 12:38:23.252938 master-0 kubenswrapper[3989]: I0313 12:38:23.252888 3989 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vwczt"
Mar 13 12:38:23.314732 master-0 kubenswrapper[3989]: I0313 12:38:23.314387 3989 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vwczt"]
Mar 13 12:38:23.319121 master-0 kubenswrapper[3989]: I0313 12:38:23.319075 3989 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vwczt"]
Mar 13 12:38:23.336230 master-0 kubenswrapper[3989]: I0313 12:38:23.336173 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-vlrf6"]
Mar 13 12:38:23.337043 master-0 kubenswrapper[3989]: E0313 12:38:23.337013 3989 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e4c9a0-202f-4cdd-905c-2913d9490e22" containerName="kubecfg-setup"
Mar 13 12:38:23.337131 master-0 kubenswrapper[3989]: I0313 12:38:23.337084 3989 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e4c9a0-202f-4cdd-905c-2913d9490e22" containerName="kubecfg-setup"
Mar 13 12:38:23.337213 master-0 kubenswrapper[3989]: I0313 12:38:23.337184 3989 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e4c9a0-202f-4cdd-905c-2913d9490e22" containerName="kubecfg-setup"
Mar 13 12:38:23.338666 master-0 kubenswrapper[3989]: I0313 12:38:23.338426 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.344019 master-0 kubenswrapper[3989]: I0313 12:38:23.341970 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 13 12:38:23.344019 master-0 kubenswrapper[3989]: I0313 12:38:23.342195 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 13 12:38:23.357138 master-0 kubenswrapper[3989]: I0313 12:38:23.357077 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-slash\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357138 master-0 kubenswrapper[3989]: I0313 12:38:23.357127 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-ovn\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357138 master-0 kubenswrapper[3989]: I0313 12:38:23.357144 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovn-node-metrics-cert\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357393 master-0 kubenswrapper[3989]: I0313 12:38:23.357162 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-systemd-units\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357393 master-0 kubenswrapper[3989]: I0313 12:38:23.357179 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-config\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357393 master-0 kubenswrapper[3989]: I0313 12:38:23.357227 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-bin\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357393 master-0 kubenswrapper[3989]: I0313 12:38:23.357315 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-netd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357393 master-0 kubenswrapper[3989]: I0313 12:38:23.357368 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357663 master-0 kubenswrapper[3989]: I0313 12:38:23.357400 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-node-log\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357663 master-0 kubenswrapper[3989]: I0313 12:38:23.357431 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-log-socket\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357929 master-0 kubenswrapper[3989]: I0313 12:38:23.357894 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-etc-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357982 master-0 kubenswrapper[3989]: I0313 12:38:23.357952 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.357982 master-0 kubenswrapper[3989]: I0313 12:38:23.357972 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b2lvh\" (UniqueName: \"kubernetes.io/projected/1ad68c2d-762a-47ed-bd56-e823a83b9087-kube-api-access-b2lvh\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.358165 master-0 kubenswrapper[3989]: I0313 12:38:23.358136 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-script-lib\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.358210 master-0 kubenswrapper[3989]: I0313 12:38:23.358170 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-netns\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.358210 master-0 kubenswrapper[3989]: I0313 12:38:23.358204 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-systemd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.358290 master-0 kubenswrapper[3989]: I0313 12:38:23.358224 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.358290 master-0 kubenswrapper[3989]: I0313 12:38:23.358255 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-env-overrides\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.358290 master-0 kubenswrapper[3989]: I0313 12:38:23.358280 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-kubelet\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.358510 master-0 kubenswrapper[3989]: I0313 12:38:23.358470 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-var-lib-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.515673 master-0 kubenswrapper[3989]: I0313 12:38:23.515472 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-config\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.515673 master-0 kubenswrapper[3989]: I0313 12:38:23.515612 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-netd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.515673 master-0 kubenswrapper[3989]: I0313 12:38:23.515650 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.515673 master-0 kubenswrapper[3989]: I0313 12:38:23.515678 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-bin\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.515715 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-node-log\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.515737 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-log-socket\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.515834 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-etc-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.515879 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.515909 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2lvh\" (UniqueName: \"kubernetes.io/projected/1ad68c2d-762a-47ed-bd56-e823a83b9087-kube-api-access-b2lvh\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.515941 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-script-lib\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.515967 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-systemd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.515986 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.516003 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-env-overrides\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.516044 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-netns\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.516069 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-kubelet\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516085 master-0 kubenswrapper[3989]: I0313 12:38:23.516098 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-var-lib-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516971 master-0 kubenswrapper[3989]: I0313 12:38:23.516125 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-ovn\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516971 master-0 kubenswrapper[3989]: I0313 12:38:23.516152 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovn-node-metrics-cert\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516971 master-0 kubenswrapper[3989]: I0313 12:38:23.516171 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-systemd-units\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516971 master-0 kubenswrapper[3989]: I0313 12:38:23.516194 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-slash\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.516971 master-0 kubenswrapper[3989]: I0313 12:38:23.516407 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-slash\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.518953 master-0 kubenswrapper[3989]: I0313 12:38:23.517626 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-config\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.518953 master-0 kubenswrapper[3989]: I0313 12:38:23.517717 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-netd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.518953 master-0 kubenswrapper[3989]: I0313 12:38:23.517751 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.518953 master-0 kubenswrapper[3989]: I0313 12:38:23.517782 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-bin\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.518953 master-0 kubenswrapper[3989]: I0313 12:38:23.517812 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-node-log\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.518953 master-0 kubenswrapper[3989]: I0313 12:38:23.517852 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-log-socket\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.518953 master-0 kubenswrapper[3989]: I0313 12:38:23.517884 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-etc-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.518953 master-0 kubenswrapper[3989]: I0313 12:38:23.517914 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.520563 master-0 kubenswrapper[3989]: I0313 12:38:23.519868 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-script-lib\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.520563 master-0 kubenswrapper[3989]: I0313 12:38:23.519950 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-systemd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.520563 master-0 kubenswrapper[3989]: I0313 12:38:23.519983 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:23.520563 master-0 kubenswrapper[3989]: I0313 12:38:23.520490 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-env-overrides\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") "
pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:23.520788 master-0 kubenswrapper[3989]: I0313 12:38:23.520546 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-netns\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:23.520788 master-0 kubenswrapper[3989]: I0313 12:38:23.520685 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-kubelet\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:23.520788 master-0 kubenswrapper[3989]: I0313 12:38:23.520723 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-var-lib-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:23.520788 master-0 kubenswrapper[3989]: I0313 12:38:23.520757 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-ovn\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:23.531609 master-0 kubenswrapper[3989]: I0313 12:38:23.530745 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovn-node-metrics-cert\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 
12:38:23.531609 master-0 kubenswrapper[3989]: I0313 12:38:23.530879 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-systemd-units\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:23.560195 master-0 kubenswrapper[3989]: I0313 12:38:23.557204 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2lvh\" (UniqueName: \"kubernetes.io/projected/1ad68c2d-762a-47ed-bd56-e823a83b9087-kube-api-access-b2lvh\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:23.658410 master-0 kubenswrapper[3989]: I0313 12:38:23.658329 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:23.671370 master-0 kubenswrapper[3989]: W0313 12:38:23.671277 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ad68c2d_762a_47ed_bd56_e823a83b9087.slice/crio-a31388bf3eb4be6295c3f302e94eade7f88980688dad331a6fb5026c223c9070 WatchSource:0}: Error finding container a31388bf3eb4be6295c3f302e94eade7f88980688dad331a6fb5026c223c9070: Status 404 returned error can't find the container with id a31388bf3eb4be6295c3f302e94eade7f88980688dad331a6fb5026c223c9070 Mar 13 12:38:24.257540 master-0 kubenswrapper[3989]: I0313 12:38:24.257230 3989 generic.go:334] "Generic (PLEG): container finished" podID="1ad68c2d-762a-47ed-bd56-e823a83b9087" containerID="99513d1025df40d0dec85b8d387ea2b55e803e627368de7db4825a3613c52248" exitCode=0 Mar 13 12:38:24.258334 master-0 kubenswrapper[3989]: I0313 12:38:24.257617 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" 
event={"ID":"1ad68c2d-762a-47ed-bd56-e823a83b9087","Type":"ContainerDied","Data":"99513d1025df40d0dec85b8d387ea2b55e803e627368de7db4825a3613c52248"} Mar 13 12:38:24.258334 master-0 kubenswrapper[3989]: I0313 12:38:24.257696 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" event={"ID":"1ad68c2d-762a-47ed-bd56-e823a83b9087","Type":"ContainerStarted","Data":"a31388bf3eb4be6295c3f302e94eade7f88980688dad331a6fb5026c223c9070"} Mar 13 12:38:24.269073 master-0 kubenswrapper[3989]: I0313 12:38:24.268511 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" event={"ID":"6d1a0616-4479-4621-b042-36a586bd8248","Type":"ContainerStarted","Data":"7f4866307ebdbbe570f92fdb143b9902466b1d4a8ee5d8db99b3a1da2dd69122"} Mar 13 12:38:24.938336 master-0 kubenswrapper[3989]: I0313 12:38:24.937910 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:24.938624 master-0 kubenswrapper[3989]: E0313 12:38:24.938419 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:24.938707 master-0 kubenswrapper[3989]: I0313 12:38:24.937935 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:24.938759 master-0 kubenswrapper[3989]: E0313 12:38:24.938727 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:24.943757 master-0 kubenswrapper[3989]: I0313 12:38:24.943692 3989 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e4c9a0-202f-4cdd-905c-2913d9490e22" path="/var/lib/kubelet/pods/03e4c9a0-202f-4cdd-905c-2913d9490e22/volumes" Mar 13 12:38:24.948709 master-0 kubenswrapper[3989]: I0313 12:38:24.948032 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-wl6w4" podStartSLOduration=4.2191685660000005 podStartE2EDuration="45.948007399s" podCreationTimestamp="2026-03-13 12:37:39 +0000 UTC" firstStartedPulling="2026-03-13 12:37:39.721253771 +0000 UTC m=+63.839721408" lastFinishedPulling="2026-03-13 12:38:21.450092604 +0000 UTC m=+105.568560241" observedRunningTime="2026-03-13 12:38:24.304847826 +0000 UTC m=+108.423315463" watchObservedRunningTime="2026-03-13 12:38:24.948007399 +0000 UTC m=+109.066475036" Mar 13 12:38:24.949027 master-0 kubenswrapper[3989]: I0313 12:38:24.949004 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 12:38:25.275319 master-0 kubenswrapper[3989]: I0313 12:38:25.275250 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" 
event={"ID":"1ad68c2d-762a-47ed-bd56-e823a83b9087","Type":"ContainerStarted","Data":"4cfbe7b49832ca908bc6f3562b2396067e0c5b401670f3ec7faf73d7d52697c7"} Mar 13 12:38:25.275319 master-0 kubenswrapper[3989]: I0313 12:38:25.275301 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" event={"ID":"1ad68c2d-762a-47ed-bd56-e823a83b9087","Type":"ContainerStarted","Data":"4b42e9799907b879a273b010ca431a81bbd67de6ee97854b96765e2f62f68e5f"} Mar 13 12:38:25.275319 master-0 kubenswrapper[3989]: I0313 12:38:25.275314 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" event={"ID":"1ad68c2d-762a-47ed-bd56-e823a83b9087","Type":"ContainerStarted","Data":"5aafd1724cc8c4923a70960aa4a74ce0d11ee5c1b9be045b7822ed97649b7eb8"} Mar 13 12:38:25.275319 master-0 kubenswrapper[3989]: I0313 12:38:25.275324 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" event={"ID":"1ad68c2d-762a-47ed-bd56-e823a83b9087","Type":"ContainerStarted","Data":"5db410ab784919749d0cae0418b67ec4f7219ff2a2f7f80b31d9238e526f49c4"} Mar 13 12:38:25.275319 master-0 kubenswrapper[3989]: I0313 12:38:25.275332 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" event={"ID":"1ad68c2d-762a-47ed-bd56-e823a83b9087","Type":"ContainerStarted","Data":"f5f478bde69d39ae35bd3b665d5bd1234f1b1121eafbec6ed32aa7fe41c9e034"} Mar 13 12:38:25.275974 master-0 kubenswrapper[3989]: I0313 12:38:25.275341 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" event={"ID":"1ad68c2d-762a-47ed-bd56-e823a83b9087","Type":"ContainerStarted","Data":"d9014233ff28e3f1b66bc9e63c2c56214cce89ba222bc9b2501d09fe3bcbac69"} Mar 13 12:38:26.742981 master-0 kubenswrapper[3989]: I0313 12:38:26.742882 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8kt\" 
(UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:26.743988 master-0 kubenswrapper[3989]: E0313 12:38:26.743101 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 12:38:26.743988 master-0 kubenswrapper[3989]: E0313 12:38:26.743127 3989 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 12:38:26.743988 master-0 kubenswrapper[3989]: E0313 12:38:26.743140 3989 projected.go:194] Error preparing data for projected volume kube-api-access-bk8kt for pod openshift-network-diagnostics/network-check-target-jjmb8: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 12:38:26.743988 master-0 kubenswrapper[3989]: E0313 12:38:26.743204 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt podName:70c8b79e-4d29-4ae2-a24f-68595d942442 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:58.743188778 +0000 UTC m=+142.861656415 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bk8kt" (UniqueName: "kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt") pod "network-check-target-jjmb8" (UID: "70c8b79e-4d29-4ae2-a24f-68595d942442") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 12:38:26.937384 master-0 kubenswrapper[3989]: I0313 12:38:26.937309 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:26.938421 master-0 kubenswrapper[3989]: E0313 12:38:26.937976 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:26.938421 master-0 kubenswrapper[3989]: I0313 12:38:26.938103 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:26.938421 master-0 kubenswrapper[3989]: E0313 12:38:26.938172 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:26.953205 master-0 kubenswrapper[3989]: I0313 12:38:26.953155 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=2.95312074 podStartE2EDuration="2.95312074s" podCreationTimestamp="2026-03-13 12:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:26.951911674 +0000 UTC m=+111.070379331" watchObservedRunningTime="2026-03-13 12:38:26.95312074 +0000 UTC m=+111.071588417" Mar 13 12:38:27.287693 master-0 kubenswrapper[3989]: I0313 12:38:27.287625 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" event={"ID":"1ad68c2d-762a-47ed-bd56-e823a83b9087","Type":"ContainerStarted","Data":"ba90d423648bcdf32c3d7c496cdf92b98c57b05c5a096f604cc9aa829a9e9fdc"} Mar 13 12:38:28.937849 master-0 kubenswrapper[3989]: I0313 12:38:28.937752 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:28.938798 master-0 kubenswrapper[3989]: I0313 12:38:28.937750 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:28.938798 master-0 kubenswrapper[3989]: E0313 12:38:28.938000 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:28.938798 master-0 kubenswrapper[3989]: E0313 12:38:28.938178 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:29.298018 master-0 kubenswrapper[3989]: I0313 12:38:29.297778 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" event={"ID":"1ad68c2d-762a-47ed-bd56-e823a83b9087","Type":"ContainerStarted","Data":"5a493d6499763257127e94edd79bcda439b4b2033ad71665ebcc3390878b237c"} Mar 13 12:38:29.298286 master-0 kubenswrapper[3989]: I0313 12:38:29.298233 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:29.298286 master-0 kubenswrapper[3989]: I0313 12:38:29.298274 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:29.298286 master-0 kubenswrapper[3989]: I0313 12:38:29.298288 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:29.322550 master-0 kubenswrapper[3989]: I0313 12:38:29.322475 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:29.326146 master-0 kubenswrapper[3989]: I0313 12:38:29.326071 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" podStartSLOduration=6.32604616 podStartE2EDuration="6.32604616s" podCreationTimestamp="2026-03-13 12:38:23 
+0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:29.323728878 +0000 UTC m=+113.442196525" watchObservedRunningTime="2026-03-13 12:38:29.32604616 +0000 UTC m=+113.444513787" Mar 13 12:38:29.328406 master-0 kubenswrapper[3989]: I0313 12:38:29.326981 3989 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:30.938498 master-0 kubenswrapper[3989]: I0313 12:38:30.937826 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:30.938498 master-0 kubenswrapper[3989]: I0313 12:38:30.938027 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:30.938498 master-0 kubenswrapper[3989]: E0313 12:38:30.938175 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:30.938498 master-0 kubenswrapper[3989]: E0313 12:38:30.938472 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:31.242202 master-0 kubenswrapper[3989]: I0313 12:38:31.242047 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ztpxf"] Mar 13 12:38:31.246830 master-0 kubenswrapper[3989]: I0313 12:38:31.246764 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-jjmb8"] Mar 13 12:38:31.302916 master-0 kubenswrapper[3989]: I0313 12:38:31.302852 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:31.302916 master-0 kubenswrapper[3989]: I0313 12:38:31.302906 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:31.303249 master-0 kubenswrapper[3989]: E0313 12:38:31.302985 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:31.303388 master-0 kubenswrapper[3989]: E0313 12:38:31.303341 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:32.938105 master-0 kubenswrapper[3989]: I0313 12:38:32.938025 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:32.938820 master-0 kubenswrapper[3989]: I0313 12:38:32.938056 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:32.938820 master-0 kubenswrapper[3989]: E0313 12:38:32.938250 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:32.938820 master-0 kubenswrapper[3989]: E0313 12:38:32.938355 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:34.939044 master-0 kubenswrapper[3989]: I0313 12:38:34.938649 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:34.939044 master-0 kubenswrapper[3989]: I0313 12:38:34.938689 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:34.940521 master-0 kubenswrapper[3989]: E0313 12:38:34.940479 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:34.940805 master-0 kubenswrapper[3989]: E0313 12:38:34.940771 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:36.487389 master-0 kubenswrapper[3989]: E0313 12:38:36.487316 3989 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Mar 13 12:38:36.937810 master-0 kubenswrapper[3989]: I0313 12:38:36.937698 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:36.937810 master-0 kubenswrapper[3989]: I0313 12:38:36.937780 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:36.938730 master-0 kubenswrapper[3989]: E0313 12:38:36.938660 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e" Mar 13 12:38:36.938992 master-0 kubenswrapper[3989]: E0313 12:38:36.938783 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-jjmb8" podUID="70c8b79e-4d29-4ae2-a24f-68595d942442" Mar 13 12:38:38.938021 master-0 kubenswrapper[3989]: I0313 12:38:38.937921 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:38:38.938981 master-0 kubenswrapper[3989]: I0313 12:38:38.938070 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:38.940934 master-0 kubenswrapper[3989]: I0313 12:38:38.940880 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 12:38:38.941535 master-0 kubenswrapper[3989]: I0313 12:38:38.941502 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 12:38:38.941989 master-0 kubenswrapper[3989]: I0313 12:38:38.941951 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 12:38:43.965889 master-0 kubenswrapper[3989]: I0313 12:38:43.965549 3989 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 13 12:38:44.110684 master-0 kubenswrapper[3989]: I0313 12:38:44.110595 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:44.111112 master-0 kubenswrapper[3989]: E0313 12:38:44.111080 3989 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 13 12:38:44.111312 master-0 kubenswrapper[3989]: E0313 12:38:44.111292 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:39:48.111255593 +0000 UTC m=+192.229723250 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : secret "metrics-daemon-secret" not found
Mar 13 12:38:45.095917 master-0 kubenswrapper[3989]: I0313 12:38:45.095864 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"]
Mar 13 12:38:45.097324 master-0 kubenswrapper[3989]: I0313 12:38:45.097295 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:38:45.099952 master-0 kubenswrapper[3989]: I0313 12:38:45.099904 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 13 12:38:45.100558 master-0 kubenswrapper[3989]: I0313 12:38:45.100519 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 12:38:45.102640 master-0 kubenswrapper[3989]: I0313 12:38:45.102593 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 12:38:45.170146 master-0 kubenswrapper[3989]: I0313 12:38:45.170070 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"]
Mar 13 12:38:45.170700 master-0 kubenswrapper[3989]: I0313 12:38:45.170660 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"]
Mar 13 12:38:45.171057 master-0 kubenswrapper[3989]: I0313 12:38:45.171017 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"
Mar 13 12:38:45.171130 master-0 kubenswrapper[3989]: I0313 12:38:45.171092 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882"]
Mar 13 12:38:45.171180 master-0 kubenswrapper[3989]: I0313 12:38:45.171143 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:38:45.171806 master-0 kubenswrapper[3989]: I0313 12:38:45.171777 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882"
Mar 13 12:38:45.175042 master-0 kubenswrapper[3989]: I0313 12:38:45.174971 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 13 12:38:45.175225 master-0 kubenswrapper[3989]: I0313 12:38:45.175134 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"]
Mar 13 12:38:45.175297 master-0 kubenswrapper[3989]: I0313 12:38:45.175268 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 13 12:38:45.175800 master-0 kubenswrapper[3989]: I0313 12:38:45.175736 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b"]
Mar 13 12:38:45.176255 master-0 kubenswrapper[3989]: I0313 12:38:45.176213 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"]
Mar 13 12:38:45.176591 master-0 kubenswrapper[3989]: I0313 12:38:45.176545 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"
Mar 13 12:38:45.176692 master-0 kubenswrapper[3989]: I0313 12:38:45.176664 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:38:45.177337 master-0 kubenswrapper[3989]: I0313 12:38:45.175754 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 13 12:38:45.177598 master-0 kubenswrapper[3989]: I0313 12:38:45.175790 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 13 12:38:45.177664 master-0 kubenswrapper[3989]: I0313 12:38:45.177631 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b"
Mar 13 12:38:45.177775 master-0 kubenswrapper[3989]: I0313 12:38:45.175846 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 13 12:38:45.177903 master-0 kubenswrapper[3989]: I0313 12:38:45.175868 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 13 12:38:45.177983 master-0 kubenswrapper[3989]: I0313 12:38:45.175903 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 13 12:38:45.178055 master-0 kubenswrapper[3989]: I0313 12:38:45.176336 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.178105 master-0 kubenswrapper[3989]: I0313 12:38:45.176401 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.178148 master-0 kubenswrapper[3989]: I0313 12:38:45.178117 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-w7mv2"]
Mar 13 12:38:45.178190 master-0 kubenswrapper[3989]: I0313 12:38:45.176431 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.178285 master-0 kubenswrapper[3989]: I0313 12:38:45.176592 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 12:38:45.178449 master-0 kubenswrapper[3989]: I0313 12:38:45.176761 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.180793 master-0 kubenswrapper[3989]: I0313 12:38:45.178559 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h"]
Mar 13 12:38:45.180793 master-0 kubenswrapper[3989]: I0313 12:38:45.178851 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"]
Mar 13 12:38:45.180793 master-0 kubenswrapper[3989]: I0313 12:38:45.179098 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"
Mar 13 12:38:45.180793 master-0 kubenswrapper[3989]: I0313 12:38:45.179476 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2"
Mar 13 12:38:45.180793 master-0 kubenswrapper[3989]: I0313 12:38:45.179784 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h"
Mar 13 12:38:45.182091 master-0 kubenswrapper[3989]: I0313 12:38:45.182063 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"]
Mar 13 12:38:45.182635 master-0 kubenswrapper[3989]: I0313 12:38:45.182612 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.186262 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.186644 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.186951 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.187077 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.187372 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.187494 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.187667 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.187776 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.187898 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.187999 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.188089 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.188484 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.188657 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.188803 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.188944 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.189083 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.189335 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.189433 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 12:38:45.190063 master-0 kubenswrapper[3989]: I0313 12:38:45.189544 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.190993 master-0 kubenswrapper[3989]: I0313 12:38:45.190909 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.191819 master-0 kubenswrapper[3989]: I0313 12:38:45.191086 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.191819 master-0 kubenswrapper[3989]: I0313 12:38:45.191195 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 12:38:45.191819 master-0 kubenswrapper[3989]: I0313 12:38:45.191297 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.192124 master-0 kubenswrapper[3989]: I0313 12:38:45.192087 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"]
Mar 13 12:38:45.192262 master-0 kubenswrapper[3989]: I0313 12:38:45.192237 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.192628 master-0 kubenswrapper[3989]: I0313 12:38:45.192562 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"]
Mar 13 12:38:45.192836 master-0 kubenswrapper[3989]: I0313 12:38:45.192802 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"]
Mar 13 12:38:45.193030 master-0 kubenswrapper[3989]: I0313 12:38:45.192987 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:38:45.193176 master-0 kubenswrapper[3989]: I0313 12:38:45.193145 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:38:45.193594 master-0 kubenswrapper[3989]: I0313 12:38:45.193550 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"]
Mar 13 12:38:45.193683 master-0 kubenswrapper[3989]: I0313 12:38:45.193659 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:38:45.201787 master-0 kubenswrapper[3989]: I0313 12:38:45.201725 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:38:45.208936 master-0 kubenswrapper[3989]: I0313 12:38:45.208882 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.209735 master-0 kubenswrapper[3989]: I0313 12:38:45.209688 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"]
Mar 13 12:38:45.209906 master-0 kubenswrapper[3989]: I0313 12:38:45.209885 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 12:38:45.210890 master-0 kubenswrapper[3989]: I0313 12:38:45.210860 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 12:38:45.211196 master-0 kubenswrapper[3989]: I0313 12:38:45.211094 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 13 12:38:45.211304 master-0 kubenswrapper[3989]: I0313 12:38:45.211122 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 13 12:38:45.211417 master-0 kubenswrapper[3989]: I0313 12:38:45.211397 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 13 12:38:45.211536 master-0 kubenswrapper[3989]: I0313 12:38:45.211505 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.211633 master-0 kubenswrapper[3989]: I0313 12:38:45.211508 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 13 12:38:45.212602 master-0 kubenswrapper[3989]: I0313 12:38:45.212565 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 12:38:45.213003 master-0 kubenswrapper[3989]: I0313 12:38:45.212980 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:38:45.213889 master-0 kubenswrapper[3989]: I0313 12:38:45.213852 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 12:38:45.214004 master-0 kubenswrapper[3989]: I0313 12:38:45.213989 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.214134 master-0 kubenswrapper[3989]: I0313 12:38:45.214115 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 12:38:45.214501 master-0 kubenswrapper[3989]: I0313 12:38:45.212568 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 13 12:38:45.214599 master-0 kubenswrapper[3989]: I0313 12:38:45.214477 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"]
Mar 13 12:38:45.215373 master-0 kubenswrapper[3989]: I0313 12:38:45.215357 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 12:38:45.215491 master-0 kubenswrapper[3989]: I0313 12:38:45.215436 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"
Mar 13 12:38:45.215642 master-0 kubenswrapper[3989]: I0313 12:38:45.215626 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 12:38:45.216490 master-0 kubenswrapper[3989]: I0313 12:38:45.216457 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"]
Mar 13 12:38:45.216992 master-0 kubenswrapper[3989]: I0313 12:38:45.216963 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:38:45.217299 master-0 kubenswrapper[3989]: I0313 12:38:45.217276 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"]
Mar 13 12:38:45.217560 master-0 kubenswrapper[3989]: I0313 12:38:45.217541 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:38:45.217981 master-0 kubenswrapper[3989]: I0313 12:38:45.217965 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt"]
Mar 13 12:38:45.218359 master-0 kubenswrapper[3989]: I0313 12:38:45.218342 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt"
Mar 13 12:38:45.219042 master-0 kubenswrapper[3989]: I0313 12:38:45.219009 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 13 12:38:45.219258 master-0 kubenswrapper[3989]: I0313 12:38:45.219237 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-pbgd4"]
Mar 13 12:38:45.219384 master-0 kubenswrapper[3989]: I0313 12:38:45.219288 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 13 12:38:45.219384 master-0 kubenswrapper[3989]: I0313 12:38:45.219334 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 13 12:38:45.219656 master-0 kubenswrapper[3989]: I0313 12:38:45.219507 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 12:38:45.219656 master-0 kubenswrapper[3989]: I0313 12:38:45.219557 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.219786 master-0 kubenswrapper[3989]: I0313 12:38:45.219761 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 12:38:45.219850 master-0 kubenswrapper[3989]: I0313 12:38:45.219809 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 13 12:38:45.220204 master-0 kubenswrapper[3989]: I0313 12:38:45.220174 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 13 12:38:45.220437 master-0 kubenswrapper[3989]: I0313 12:38:45.220414 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"]
Mar 13 12:38:45.220887 master-0 kubenswrapper[3989]: I0313 12:38:45.220807 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:38:45.221228 master-0 kubenswrapper[3989]: I0313 12:38:45.221169 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"
Mar 13 12:38:45.221228 master-0 kubenswrapper[3989]: I0313 12:38:45.221201 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73dc5747-2d30-4a2d-a784-1dea1e10811d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"
Mar 13 12:38:45.221345 master-0 kubenswrapper[3989]: I0313 12:38:45.221230 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/edde8919-104a-4f05-8e21-46787f706bed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:38:45.221345 master-0 kubenswrapper[3989]: I0313 12:38:45.221251 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1213b50-28bf-43ff-94c4-20616907735b-trusted-ca\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:38:45.221345 master-0 kubenswrapper[3989]: I0313 12:38:45.221272 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2ad4825-17fa-4ddd-b21e-334158f1c048-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"
Mar 13 12:38:45.221345 master-0 kubenswrapper[3989]: I0313 12:38:45.221283 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 12:38:45.221345 master-0 kubenswrapper[3989]: I0313 12:38:45.221288 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/71b741d4-3899-4d31-afd1-72f5a9321f75-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:38:45.221345 master-0 kubenswrapper[3989]: I0313 12:38:45.221314 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:38:45.221345 master-0 kubenswrapper[3989]: I0313 12:38:45.221331 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/684c9067-189a-4f50-ac8d-97111aa73d9c-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:38:45.221345 master-0 kubenswrapper[3989]: I0313 12:38:45.221349 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:38:45.221711 master-0 kubenswrapper[3989]: I0313 12:38:45.221364 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ad4825-17fa-4ddd-b21e-334158f1c048-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"
Mar 13 12:38:45.221711 master-0 kubenswrapper[3989]: I0313 12:38:45.221396 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:38:45.221711 master-0 kubenswrapper[3989]: I0313 12:38:45.221415 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:38:45.221711 master-0 kubenswrapper[3989]: I0313 12:38:45.221426 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 13 12:38:45.221711 master-0 kubenswrapper[3989]: I0313 12:38:45.221431 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:38:45.221711 master-0 kubenswrapper[3989]: I0313 12:38:45.221566 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tnbf9\" (UniqueName: \"kubernetes.io/projected/b2ad4825-17fa-4ddd-b21e-334158f1c048-kube-api-access-tnbf9\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"
Mar 13 12:38:45.221711 master-0 kubenswrapper[3989]: I0313 12:38:45.221654 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:38:45.221711 master-0 kubenswrapper[3989]: I0313 12:38:45.221698 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-992bv\" (UniqueName: \"kubernetes.io/projected/edde8919-104a-4f05-8e21-46787f706bed-kube-api-access-992bv\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:38:45.221926 master-0 kubenswrapper[3989]: I0313 12:38:45.221734 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a74c2a-8376-4998-bdc6-02a978f1f568-serving-cert\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:38:45.221926 master-0 kubenswrapper[3989]: I0313 12:38:45.221754 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603fef71-e0cd-4617-bd8a-a55580578c2f-serving-cert\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882"
Mar 13 12:38:45.221926 master-0 kubenswrapper[3989]: I0313 12:38:45.221769 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2"
Mar 13 12:38:45.221926 master-0 kubenswrapper[3989]: I0313 12:38:45.221787 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/54c7efc1-6d89-4831-89d6-6f2812c36c36-operand-assets\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b"
Mar 13 12:38:45.221926 master-0 kubenswrapper[3989]: I0313 12:38:45.221801 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dc5747-2d30-4a2d-a784-1dea1e10811d-config\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"
Mar 13 12:38:45.221926 master-0 kubenswrapper[3989]: I0313 12:38:45.221818 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1929440f-f2cc-450d-80ff-ded6788baa74-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"
Mar 13 12:38:45.221926 master-0 kubenswrapper[3989]: I0313 12:38:45.221839 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d868028-9984-472a-8403-ffed767e1bf8-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt"
Mar 13 12:38:45.221926 master-0 kubenswrapper[3989]: I0313 12:38:45.221876 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:38:45.221926 master-0 kubenswrapper[3989]: I0313 12:38:45.221901 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-config\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:38:45.221926 master-0 kubenswrapper[3989]: I0313 12:38:45.221922 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2dq8\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-kube-api-access-c2dq8\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.221951 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.221984 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkcxc\" (UniqueName: \"kubernetes.io/projected/8226ffac-1f76-4eaa-ada5-056b5fd031b4-kube-api-access-gkcxc\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222036 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vsld\" (UniqueName: \"kubernetes.io/projected/73dc5747-2d30-4a2d-a784-1dea1e10811d-kube-api-access-9vsld\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222056 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-serving-cert\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222073 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n8sb\" (UniqueName: \"kubernetes.io/projected/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa-kube-api-access-9n8sb\") pod \"csi-snapshot-controller-operator-5685fbc7d-77b2h\" (UID: \"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222090 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222105 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684c9067-189a-4f50-ac8d-97111aa73d9c-config\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222127 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm25n\" (UniqueName: \"kubernetes.io/projected/f85ab8ab-f9f1-47ad-9c96-9498cef92474-kube-api-access-sm25n\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222161 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684c9067-189a-4f50-ac8d-97111aa73d9c-serving-cert\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222176 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1929440f-f2cc-450d-80ff-ded6788baa74-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"
Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222199 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkjph\" (UniqueName: \"kubernetes.io/projected/f2a74c2a-8376-4998-bdc6-02a978f1f568-kube-api-access-bkjph\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " 
pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222235 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2jkn\" (UniqueName: \"kubernetes.io/projected/6e55908e-59f3-45a2-82aa-2616c5a2fd52-kube-api-access-x2jkn\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222256 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:45.222552 master-0 kubenswrapper[3989]: I0313 12:38:45.222276 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-bound-sa-token\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222306 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1929440f-f2cc-450d-80ff-ded6788baa74-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222330 3989 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rspzx\" (UniqueName: \"kubernetes.io/projected/603fef71-e0cd-4617-bd8a-a55580578c2f-kube-api-access-rspzx\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222346 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222366 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-config\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222388 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvprm\" (UniqueName: \"kubernetes.io/projected/20217cff-2f81-4a56-9c15-28385c19258c-kube-api-access-nvprm\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222412 3989 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603fef71-e0cd-4617-bd8a-a55580578c2f-config\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222429 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d868028-9984-472a-8403-ffed767e1bf8-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222449 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c7efc1-6d89-4831-89d6-6f2812c36c36-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222465 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-config\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222487 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/0d868028-9984-472a-8403-ffed767e1bf8-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222508 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-client\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222524 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h5ht\" (UniqueName: \"kubernetes.io/projected/71b741d4-3899-4d31-afd1-72f5a9321f75-kube-api-access-2h5ht\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222547 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqm5h\" (UniqueName: \"kubernetes.io/projected/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-kube-api-access-pqm5h\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:45.223313 master-0 kubenswrapper[3989]: I0313 12:38:45.222674 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod 
\"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:45.223796 master-0 kubenswrapper[3989]: I0313 12:38:45.222719 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdcsm\" (UniqueName: \"kubernetes.io/projected/6e4e773c-d970-4f5e-9172-c1ebdb41888d-kube-api-access-tdcsm\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:45.223796 master-0 kubenswrapper[3989]: I0313 12:38:45.222764 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.223796 master-0 kubenswrapper[3989]: I0313 12:38:45.222782 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.223796 master-0 kubenswrapper[3989]: I0313 12:38:45.222797 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9sfh\" (UniqueName: \"kubernetes.io/projected/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-kube-api-access-r9sfh\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:45.223796 master-0 kubenswrapper[3989]: I0313 12:38:45.222815 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qttkt\" (UniqueName: \"kubernetes.io/projected/54c7efc1-6d89-4831-89d6-6f2812c36c36-kube-api-access-qttkt\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:45.223796 master-0 kubenswrapper[3989]: I0313 12:38:45.222831 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edde8919-104a-4f05-8e21-46787f706bed-serving-cert\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:45.224483 master-0 kubenswrapper[3989]: I0313 12:38:45.224455 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 12:38:45.224629 master-0 kubenswrapper[3989]: I0313 12:38:45.224608 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 12:38:45.224703 master-0 kubenswrapper[3989]: I0313 12:38:45.224659 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 12:38:45.226473 master-0 kubenswrapper[3989]: I0313 12:38:45.226444 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 12:38:45.230089 master-0 kubenswrapper[3989]: I0313 12:38:45.226647 3989 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-etcd-operator"/"etcd-operator-config" Mar 13 12:38:45.230234 master-0 kubenswrapper[3989]: I0313 12:38:45.226680 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:38:45.230322 master-0 kubenswrapper[3989]: I0313 12:38:45.226707 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 12:38:45.230403 master-0 kubenswrapper[3989]: I0313 12:38:45.226776 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 12:38:45.230479 master-0 kubenswrapper[3989]: I0313 12:38:45.226800 3989 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 13 12:38:45.230559 master-0 kubenswrapper[3989]: I0313 12:38:45.227147 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 12:38:45.230695 master-0 kubenswrapper[3989]: I0313 12:38:45.230021 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 12:38:45.310283 master-0 kubenswrapper[3989]: I0313 12:38:45.303893 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 12:38:45.323372 master-0 kubenswrapper[3989]: I0313 12:38:45.323331 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:38:45.323520 master-0 kubenswrapper[3989]: I0313 12:38:45.323404 3989 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-992bv\" (UniqueName: \"kubernetes.io/projected/edde8919-104a-4f05-8e21-46787f706bed-kube-api-access-992bv\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:45.323520 master-0 kubenswrapper[3989]: I0313 12:38:45.323441 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a74c2a-8376-4998-bdc6-02a978f1f568-serving-cert\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.323713 master-0 kubenswrapper[3989]: I0313 12:38:45.323634 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603fef71-e0cd-4617-bd8a-a55580578c2f-serving-cert\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:45.323713 master-0 kubenswrapper[3989]: I0313 12:38:45.323679 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:38:45.323713 master-0 kubenswrapper[3989]: E0313 12:38:45.323673 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:38:45.323850 master-0 kubenswrapper[3989]: E0313 12:38:45.323781 3989 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.82375877 +0000 UTC m=+129.942226407 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found Mar 13 12:38:45.323850 master-0 kubenswrapper[3989]: E0313 12:38:45.323826 3989 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:38:45.323937 master-0 kubenswrapper[3989]: I0313 12:38:45.323847 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/54c7efc1-6d89-4831-89d6-6f2812c36c36-operand-assets\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:45.323937 master-0 kubenswrapper[3989]: I0313 12:38:45.323888 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dc5747-2d30-4a2d-a784-1dea1e10811d-config\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:38:45.323937 master-0 kubenswrapper[3989]: I0313 12:38:45.323920 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1929440f-f2cc-450d-80ff-ded6788baa74-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.323959 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7s2cb\" (UniqueName: \"kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.323993 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: E0313 12:38:45.324080 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.82406547 +0000 UTC m=+129.942533107 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: E0313 12:38:45.324291 3989 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: E0313 12:38:45.324462 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.824378579 +0000 UTC m=+129.942846216 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.324837 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/54c7efc1-6d89-4831-89d6-6f2812c36c36-operand-assets\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.324918 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d868028-9984-472a-8403-ffed767e1bf8-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: 
\"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.324961 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.324986 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-config\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.325362 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2dq8\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-kube-api-access-c2dq8\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.325393 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkcxc\" (UniqueName: \"kubernetes.io/projected/8226ffac-1f76-4eaa-ada5-056b5fd031b4-kube-api-access-gkcxc\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.325421 3989 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vsld\" (UniqueName: \"kubernetes.io/projected/73dc5747-2d30-4a2d-a784-1dea1e10811d-kube-api-access-9vsld\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.325444 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-serving-cert\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.328968 master-0 kubenswrapper[3989]: I0313 12:38:45.325466 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n8sb\" (UniqueName: \"kubernetes.io/projected/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa-kube-api-access-9n8sb\") pod \"csi-snapshot-controller-operator-5685fbc7d-77b2h\" (UID: \"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.325488 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.325509 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684c9067-189a-4f50-ac8d-97111aa73d9c-config\") pod 
\"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.325526 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm25n\" (UniqueName: \"kubernetes.io/projected/f85ab8ab-f9f1-47ad-9c96-9498cef92474-kube-api-access-sm25n\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.325547 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.325568 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684c9067-189a-4f50-ac8d-97111aa73d9c-serving-cert\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.325610 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c2d774-967f-4964-ab4e-eb13c4364f63-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.325657 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1929440f-f2cc-450d-80ff-ded6788baa74-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: E0313 12:38:45.325072 3989 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: E0313 12:38:45.326301 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.826236446 +0000 UTC m=+129.944704173 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.326556 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkjph\" (UniqueName: \"kubernetes.io/projected/f2a74c2a-8376-4998-bdc6-02a978f1f568-kube-api-access-bkjph\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.326625 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2jkn\" (UniqueName: \"kubernetes.io/projected/6e55908e-59f3-45a2-82aa-2616c5a2fd52-kube-api-access-x2jkn\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.326659 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdvgq\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-kube-api-access-bdvgq\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.326690 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-trusted-ca\") pod 
\"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.326713 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-bound-sa-token\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:45.329769 master-0 kubenswrapper[3989]: I0313 12:38:45.326737 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rspzx\" (UniqueName: \"kubernetes.io/projected/603fef71-e0cd-4617-bd8a-a55580578c2f-kube-api-access-rspzx\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.326780 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1929440f-f2cc-450d-80ff-ded6788baa74-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.326798 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.330402 master-0 
kubenswrapper[3989]: I0313 12:38:45.326821 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.326847 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.326865 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-config\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.326887 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvprm\" (UniqueName: \"kubernetes.io/projected/20217cff-2f81-4a56-9c15-28385c19258c-kube-api-access-nvprm\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.326916 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/603fef71-e0cd-4617-bd8a-a55580578c2f-config\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.326943 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-config\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.326965 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d868028-9984-472a-8403-ffed767e1bf8-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.326988 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c7efc1-6d89-4831-89d6-6f2812c36c36-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.327017 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5ht\" (UniqueName: \"kubernetes.io/projected/71b741d4-3899-4d31-afd1-72f5a9321f75-kube-api-access-2h5ht\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: 
\"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.327038 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d868028-9984-472a-8403-ffed767e1bf8-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.327056 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-client\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.327535 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dc5747-2d30-4a2d-a784-1dea1e10811d-config\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:38:45.330402 master-0 kubenswrapper[3989]: I0313 12:38:45.328288 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-config\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.328781 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.329101 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603fef71-e0cd-4617-bd8a-a55580578c2f-config\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.329898 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1929440f-f2cc-450d-80ff-ded6788baa74-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.330006 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-config\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.330088 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-config\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.330536 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684c9067-189a-4f50-ac8d-97111aa73d9c-config\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.328560 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqm5h\" (UniqueName: \"kubernetes.io/projected/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-kube-api-access-pqm5h\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.330657 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.330687 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdcsm\" (UniqueName: \"kubernetes.io/projected/6e4e773c-d970-4f5e-9172-c1ebdb41888d-kube-api-access-tdcsm\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.330736 3989 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.330756 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.330778 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9sfh\" (UniqueName: \"kubernetes.io/projected/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-kube-api-access-r9sfh\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.330809 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qttkt\" (UniqueName: \"kubernetes.io/projected/54c7efc1-6d89-4831-89d6-6f2812c36c36-kube-api-access-qttkt\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:45.330886 master-0 kubenswrapper[3989]: I0313 12:38:45.330833 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edde8919-104a-4f05-8e21-46787f706bed-serving-cert\") pod 
\"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:45.331358 master-0 kubenswrapper[3989]: E0313 12:38:45.331086 3989 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:38:45.331358 master-0 kubenswrapper[3989]: I0313 12:38:45.331141 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73dc5747-2d30-4a2d-a784-1dea1e10811d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:38:45.331358 master-0 kubenswrapper[3989]: E0313 12:38:45.331188 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.831161328 +0000 UTC m=+129.949628965 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found Mar 13 12:38:45.331358 master-0 kubenswrapper[3989]: I0313 12:38:45.331243 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/edde8919-104a-4f05-8e21-46787f706bed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:45.331358 master-0 kubenswrapper[3989]: I0313 12:38:45.331297 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1213b50-28bf-43ff-94c4-20616907735b-trusted-ca\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:45.331358 master-0 kubenswrapper[3989]: I0313 12:38:45.331328 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2ad4825-17fa-4ddd-b21e-334158f1c048-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:45.331358 master-0 kubenswrapper[3989]: I0313 12:38:45.331353 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/71b741d4-3899-4d31-afd1-72f5a9321f75-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: 
\"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:45.331614 master-0 kubenswrapper[3989]: I0313 12:38:45.331383 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ad4825-17fa-4ddd-b21e-334158f1c048-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:45.331614 master-0 kubenswrapper[3989]: I0313 12:38:45.331405 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:45.331614 master-0 kubenswrapper[3989]: I0313 12:38:45.331425 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/684c9067-189a-4f50-ac8d-97111aa73d9c-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:38:45.331614 master-0 kubenswrapper[3989]: I0313 12:38:45.331448 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.331614 master-0 kubenswrapper[3989]: I0313 12:38:45.331497 3989 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:45.331614 master-0 kubenswrapper[3989]: I0313 12:38:45.331521 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:38:45.331614 master-0 kubenswrapper[3989]: I0313 12:38:45.331544 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnbf9\" (UniqueName: \"kubernetes.io/projected/b2ad4825-17fa-4ddd-b21e-334158f1c048-kube-api-access-tnbf9\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:45.331614 master-0 kubenswrapper[3989]: I0313 12:38:45.331562 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.331614 master-0 kubenswrapper[3989]: I0313 12:38:45.331605 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:45.331864 master-0 kubenswrapper[3989]: I0313 12:38:45.331626 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27pbr\" (UniqueName: \"kubernetes.io/projected/2a5976df-0366-47b3-bc54-1ba7c249e87c-kube-api-access-27pbr\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:45.331986 master-0 kubenswrapper[3989]: I0313 12:38:45.331949 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/edde8919-104a-4f05-8e21-46787f706bed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: I0313 12:38:45.332726 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: I0313 12:38:45.333181 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " 
pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: I0313 12:38:45.333355 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603fef71-e0cd-4617-bd8a-a55580578c2f-serving-cert\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: I0313 12:38:45.333443 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ad4825-17fa-4ddd-b21e-334158f1c048-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: E0313 12:38:45.333573 3989 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: I0313 12:38:45.333624 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-serving-cert\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: I0313 12:38:45.333639 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/71b741d4-3899-4d31-afd1-72f5a9321f75-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " 
pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: E0313 12:38:45.333654 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.833633433 +0000 UTC m=+129.952101290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: I0313 12:38:45.333706 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d868028-9984-472a-8403-ffed767e1bf8-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: I0313 12:38:45.333693 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: I0313 12:38:45.334043 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1213b50-28bf-43ff-94c4-20616907735b-trusted-ca\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: 
\"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: E0313 12:38:45.334379 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: E0313 12:38:45.334435 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.834415496 +0000 UTC m=+129.952883133 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: E0313 12:38:45.334475 3989 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:38:45.334872 master-0 kubenswrapper[3989]: E0313 12:38:45.334515 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.834505319 +0000 UTC m=+129.952973146 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:38:45.337233 master-0 kubenswrapper[3989]: I0313 12:38:45.334556 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.337233 master-0 kubenswrapper[3989]: I0313 12:38:45.334875 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.337233 master-0 kubenswrapper[3989]: I0313 12:38:45.334955 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.337233 master-0 kubenswrapper[3989]: I0313 12:38:45.335691 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73dc5747-2d30-4a2d-a784-1dea1e10811d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:38:45.337233 master-0 kubenswrapper[3989]: I0313 12:38:45.336292 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1929440f-f2cc-450d-80ff-ded6788baa74-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:45.337233 master-0 kubenswrapper[3989]: I0313 12:38:45.336423 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a74c2a-8376-4998-bdc6-02a978f1f568-serving-cert\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.337233 master-0 kubenswrapper[3989]: I0313 12:38:45.336533 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d868028-9984-472a-8403-ffed767e1bf8-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:45.338608 master-0 kubenswrapper[3989]: I0313 12:38:45.338500 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c7efc1-6d89-4831-89d6-6f2812c36c36-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:45.339131 master-0 kubenswrapper[3989]: I0313 12:38:45.339080 3989 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edde8919-104a-4f05-8e21-46787f706bed-serving-cert\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:45.339443 master-0 kubenswrapper[3989]: I0313 12:38:45.339398 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-client\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.339515 master-0 kubenswrapper[3989]: I0313 12:38:45.339407 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684c9067-189a-4f50-ac8d-97111aa73d9c-serving-cert\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:38:45.342025 master-0 kubenswrapper[3989]: I0313 12:38:45.341990 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2ad4825-17fa-4ddd-b21e-334158f1c048-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:45.432609 master-0 kubenswrapper[3989]: I0313 12:38:45.432557 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.433050 master-0 kubenswrapper[3989]: I0313 12:38:45.433026 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c2d774-967f-4964-ab4e-eb13c4364f63-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.433177 master-0 kubenswrapper[3989]: I0313 12:38:45.433161 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdvgq\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-kube-api-access-bdvgq\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.433242 master-0 kubenswrapper[3989]: E0313 12:38:45.432810 3989 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:38:45.433367 master-0 kubenswrapper[3989]: E0313 12:38:45.433353 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.933325357 +0000 UTC m=+130.051792994 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found Mar 13 12:38:45.433551 master-0 kubenswrapper[3989]: I0313 12:38:45.433536 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:38:45.433656 master-0 kubenswrapper[3989]: E0313 12:38:45.433636 3989 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:38:45.433723 master-0 kubenswrapper[3989]: E0313 12:38:45.433675 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.933662308 +0000 UTC m=+130.052129945 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found Mar 13 12:38:45.433819 master-0 kubenswrapper[3989]: I0313 12:38:45.433798 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.434451 master-0 kubenswrapper[3989]: I0313 12:38:45.434413 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c2d774-967f-4964-ab4e-eb13c4364f63-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.434647 master-0 kubenswrapper[3989]: I0313 12:38:45.434618 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27pbr\" (UniqueName: \"kubernetes.io/projected/2a5976df-0366-47b3-bc54-1ba7c249e87c-kube-api-access-27pbr\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:45.435259 master-0 kubenswrapper[3989]: I0313 12:38:45.435230 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:45.435373 master-0 kubenswrapper[3989]: I0313 12:38:45.435350 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2cb\" (UniqueName: \"kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:38:45.436427 master-0 kubenswrapper[3989]: E0313 12:38:45.435861 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:38:45.436876 master-0 kubenswrapper[3989]: E0313 12:38:45.436859 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:45.936842845 +0000 UTC m=+130.055310482 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found Mar 13 12:38:45.477880 master-0 kubenswrapper[3989]: I0313 12:38:45.477455 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"] Mar 13 12:38:45.551361 master-0 kubenswrapper[3989]: I0313 12:38:45.548679 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"] Mar 13 12:38:45.551361 master-0 kubenswrapper[3989]: I0313 12:38:45.548739 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"] Mar 13 12:38:45.551361 master-0 kubenswrapper[3989]: I0313 12:38:45.548752 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b"] Mar 13 12:38:45.551361 master-0 kubenswrapper[3989]: I0313 12:38:45.550792 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"] Mar 13 12:38:45.551361 master-0 kubenswrapper[3989]: I0313 12:38:45.550947 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h"] Mar 13 12:38:45.551985 master-0 kubenswrapper[3989]: I0313 12:38:45.551933 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"] Mar 13 12:38:45.552819 master-0 kubenswrapper[3989]: I0313 12:38:45.552779 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"] Mar 
13 12:38:45.553667 master-0 kubenswrapper[3989]: I0313 12:38:45.553552 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"] Mar 13 12:38:45.554652 master-0 kubenswrapper[3989]: I0313 12:38:45.554612 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"] Mar 13 12:38:45.555558 master-0 kubenswrapper[3989]: I0313 12:38:45.555532 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt"] Mar 13 12:38:45.556423 master-0 kubenswrapper[3989]: I0313 12:38:45.556391 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"] Mar 13 12:38:45.655605 master-0 kubenswrapper[3989]: I0313 12:38:45.654932 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-w7mv2"] Mar 13 12:38:45.655605 master-0 kubenswrapper[3989]: I0313 12:38:45.655003 3989 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-456r5"] Mar 13 12:38:45.655934 master-0 kubenswrapper[3989]: I0313 12:38:45.655707 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:45.662603 master-0 kubenswrapper[3989]: I0313 12:38:45.659812 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"] Mar 13 12:38:45.662603 master-0 kubenswrapper[3989]: I0313 12:38:45.659868 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"] Mar 13 12:38:45.662603 master-0 kubenswrapper[3989]: I0313 12:38:45.659879 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882"] Mar 13 12:38:45.662603 master-0 kubenswrapper[3989]: I0313 12:38:45.660054 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.662603 master-0 kubenswrapper[3989]: I0313 12:38:45.660442 3989 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 12:38:45.662603 master-0 kubenswrapper[3989]: I0313 12:38:45.661864 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2cb\" (UniqueName: \"kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:38:45.687622 master-0 kubenswrapper[3989]: I0313 12:38:45.687271 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"] Mar 13 12:38:45.687622 master-0 kubenswrapper[3989]: I0313 12:38:45.687319 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"] Mar 13 12:38:45.690519 master-0 kubenswrapper[3989]: I0313 12:38:45.688011 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"] Mar 13 12:38:45.692304 master-0 kubenswrapper[3989]: I0313 12:38:45.692261 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vsld\" (UniqueName: \"kubernetes.io/projected/73dc5747-2d30-4a2d-a784-1dea1e10811d-kube-api-access-9vsld\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:38:45.694485 master-0 kubenswrapper[3989]: I0313 12:38:45.692930 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rspzx\" (UniqueName: \"kubernetes.io/projected/603fef71-e0cd-4617-bd8a-a55580578c2f-kube-api-access-rspzx\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:45.694485 master-0 kubenswrapper[3989]: I0313 12:38:45.693097 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2dq8\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-kube-api-access-c2dq8\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:45.694485 master-0 kubenswrapper[3989]: I0313 12:38:45.693429 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqm5h\" 
(UniqueName: \"kubernetes.io/projected/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-kube-api-access-pqm5h\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:45.694485 master-0 kubenswrapper[3989]: I0313 12:38:45.693506 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"] Mar 13 12:38:45.694485 master-0 kubenswrapper[3989]: I0313 12:38:45.693545 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-pbgd4"] Mar 13 12:38:45.694485 master-0 kubenswrapper[3989]: I0313 12:38:45.693559 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"] Mar 13 12:38:45.697720 master-0 kubenswrapper[3989]: I0313 12:38:45.697679 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9sfh\" (UniqueName: \"kubernetes.io/projected/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-kube-api-access-r9sfh\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:45.698696 master-0 kubenswrapper[3989]: I0313 12:38:45.698660 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkjph\" (UniqueName: \"kubernetes.io/projected/f2a74c2a-8376-4998-bdc6-02a978f1f568-kube-api-access-bkjph\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.699608 master-0 kubenswrapper[3989]: I0313 12:38:45.699499 3989 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-bound-sa-token\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:45.699672 master-0 kubenswrapper[3989]: I0313 12:38:45.699656 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27pbr\" (UniqueName: \"kubernetes.io/projected/2a5976df-0366-47b3-bc54-1ba7c249e87c-kube-api-access-27pbr\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:45.701632 master-0 kubenswrapper[3989]: I0313 12:38:45.701105 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qttkt\" (UniqueName: \"kubernetes.io/projected/54c7efc1-6d89-4831-89d6-6f2812c36c36-kube-api-access-qttkt\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:45.701632 master-0 kubenswrapper[3989]: I0313 12:38:45.701131 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2jkn\" (UniqueName: \"kubernetes.io/projected/6e55908e-59f3-45a2-82aa-2616c5a2fd52-kube-api-access-x2jkn\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.701632 master-0 kubenswrapper[3989]: I0313 12:38:45.701306 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n8sb\" (UniqueName: \"kubernetes.io/projected/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa-kube-api-access-9n8sb\") pod \"csi-snapshot-controller-operator-5685fbc7d-77b2h\" (UID: \"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa\") " 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" Mar 13 12:38:45.701632 master-0 kubenswrapper[3989]: I0313 12:38:45.701510 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/684c9067-189a-4f50-ac8d-97111aa73d9c-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:38:45.701922 master-0 kubenswrapper[3989]: I0313 12:38:45.701649 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1929440f-f2cc-450d-80ff-ded6788baa74-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:45.701922 master-0 kubenswrapper[3989]: I0313 12:38:45.701865 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d868028-9984-472a-8403-ffed767e1bf8-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:45.702197 master-0 kubenswrapper[3989]: I0313 12:38:45.702174 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdcsm\" (UniqueName: \"kubernetes.io/projected/6e4e773c-d970-4f5e-9172-c1ebdb41888d-kube-api-access-tdcsm\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:45.702670 master-0 kubenswrapper[3989]: I0313 12:38:45.702636 3989 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-992bv\" (UniqueName: \"kubernetes.io/projected/edde8919-104a-4f05-8e21-46787f706bed-kube-api-access-992bv\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:45.703221 master-0 kubenswrapper[3989]: I0313 12:38:45.703184 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnbf9\" (UniqueName: \"kubernetes.io/projected/b2ad4825-17fa-4ddd-b21e-334158f1c048-kube-api-access-tnbf9\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:45.704255 master-0 kubenswrapper[3989]: I0313 12:38:45.703549 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5ht\" (UniqueName: \"kubernetes.io/projected/71b741d4-3899-4d31-afd1-72f5a9321f75-kube-api-access-2h5ht\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:45.704255 master-0 kubenswrapper[3989]: I0313 12:38:45.703654 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvprm\" (UniqueName: \"kubernetes.io/projected/20217cff-2f81-4a56-9c15-28385c19258c-kube-api-access-nvprm\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:38:45.704255 master-0 kubenswrapper[3989]: I0313 12:38:45.704035 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm25n\" (UniqueName: 
\"kubernetes.io/projected/f85ab8ab-f9f1-47ad-9c96-9498cef92474-kube-api-access-sm25n\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:38:45.710869 master-0 kubenswrapper[3989]: I0313 12:38:45.710830 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdvgq\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-kube-api-access-bdvgq\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.711074 master-0 kubenswrapper[3989]: I0313 12:38:45.711054 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkcxc\" (UniqueName: \"kubernetes.io/projected/8226ffac-1f76-4eaa-ada5-056b5fd031b4-kube-api-access-gkcxc\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:38:45.751900 master-0 kubenswrapper[3989]: I0313 12:38:45.751855 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:45.763304 master-0 kubenswrapper[3989]: I0313 12:38:45.762522 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b5ab386-14ed-4610-a08a-54b6de877603-iptables-alerter-script\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:45.763304 master-0 kubenswrapper[3989]: I0313 12:38:45.762565 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b5ab386-14ed-4610-a08a-54b6de877603-host-slash\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:45.763304 master-0 kubenswrapper[3989]: I0313 12:38:45.762686 3989 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqxjz\" (UniqueName: \"kubernetes.io/projected/2b5ab386-14ed-4610-a08a-54b6de877603-kube-api-access-nqxjz\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:45.763304 master-0 kubenswrapper[3989]: I0313 12:38:45.762854 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:45.771306 master-0 kubenswrapper[3989]: I0313 12:38:45.771089 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:45.787067 master-0 kubenswrapper[3989]: I0313 12:38:45.786948 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:45.820192 master-0 kubenswrapper[3989]: I0313 12:38:45.819413 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:45.827991 master-0 kubenswrapper[3989]: I0313 12:38:45.827796 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.863732 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqxjz\" (UniqueName: \"kubernetes.io/projected/2b5ab386-14ed-4610-a08a-54b6de877603-kube-api-access-nqxjz\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.863974 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.864054 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: E0313 12:38:45.864071 3989 
secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.864097 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.864126 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: E0313 12:38:45.864143 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.864122505 +0000 UTC m=+130.982590192 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.864177 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b5ab386-14ed-4610-a08a-54b6de877603-iptables-alerter-script\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: E0313 12:38:45.864269 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: E0313 12:38:45.864333 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.864313981 +0000 UTC m=+130.982781678 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.864357 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b5ab386-14ed-4610-a08a-54b6de877603-host-slash\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.864393 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.864430 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.864470 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " 
pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: I0313 12:38:45.864500 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:45.864640 master-0 kubenswrapper[3989]: E0313 12:38:45.864618 3989 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.864652 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.864642821 +0000 UTC m=+130.983110458 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.864702 3989 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.864755 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.864742974 +0000 UTC m=+130.983210691 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.864804 3989 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.864831 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.864823297 +0000 UTC m=+130.983291054 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: I0313 12:38:45.864875 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b5ab386-14ed-4610-a08a-54b6de877603-host-slash\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.864937 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.864962 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.864955051 +0000 UTC m=+130.983422778 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.865004 3989 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.865031 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.865022063 +0000 UTC m=+130.983489820 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.865084 3989 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: E0313 12:38:45.865113 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.865104786 +0000 UTC m=+130.983572523 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found Mar 13 12:38:45.865568 master-0 kubenswrapper[3989]: I0313 12:38:45.865002 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b5ab386-14ed-4610-a08a-54b6de877603-iptables-alerter-script\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:45.905093 master-0 kubenswrapper[3989]: I0313 12:38:45.905062 3989 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqxjz\" (UniqueName: \"kubernetes.io/projected/2b5ab386-14ed-4610-a08a-54b6de877603-kube-api-access-nqxjz\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:45.909673 master-0 kubenswrapper[3989]: I0313 12:38:45.906916 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:45.927633 master-0 kubenswrapper[3989]: I0313 12:38:45.922673 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:45.965963 master-0 kubenswrapper[3989]: I0313 12:38:45.962863 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:45.966401 master-0 kubenswrapper[3989]: I0313 12:38:45.966115 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:45.966401 master-0 kubenswrapper[3989]: E0313 12:38:45.966302 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:38:45.967965 master-0 kubenswrapper[3989]: E0313 12:38:45.966380 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.966357228 +0000 UTC m=+131.084824925 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found Mar 13 12:38:45.968074 master-0 kubenswrapper[3989]: I0313 12:38:45.968034 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:45.970305 master-0 kubenswrapper[3989]: I0313 12:38:45.968144 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:38:45.970305 master-0 kubenswrapper[3989]: E0313 12:38:45.968469 3989 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:38:45.970305 master-0 kubenswrapper[3989]: E0313 12:38:45.968514 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.968500953 +0000 UTC m=+131.086968590 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found Mar 13 12:38:45.970305 master-0 kubenswrapper[3989]: E0313 12:38:45.968593 3989 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:38:45.970305 master-0 kubenswrapper[3989]: E0313 12:38:45.968628 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:46.968618617 +0000 UTC m=+131.087086254 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found Mar 13 12:38:45.971568 master-0 kubenswrapper[3989]: I0313 12:38:45.970962 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:45.981551 master-0 kubenswrapper[3989]: I0313 12:38:45.981085 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:38:45.994982 master-0 kubenswrapper[3989]: I0313 12:38:45.994915 3989 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" Mar 13 12:38:46.024285 master-0 kubenswrapper[3989]: I0313 12:38:46.023070 3989 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:46.058364 master-0 kubenswrapper[3989]: I0313 12:38:46.052350 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"] Mar 13 12:38:46.068936 master-0 kubenswrapper[3989]: I0313 12:38:46.066692 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"] Mar 13 12:38:46.076635 master-0 kubenswrapper[3989]: W0313 12:38:46.076540 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b5ab386_14ed_4610_a08a_54b6de877603.slice/crio-775453f0311a20f5a59ce1be5cefed7836882d9f13ee9dc3248617ae5895d787 WatchSource:0}: Error finding container 775453f0311a20f5a59ce1be5cefed7836882d9f13ee9dc3248617ae5895d787: Status 404 returned error can't find the container with id 775453f0311a20f5a59ce1be5cefed7836882d9f13ee9dc3248617ae5895d787 Mar 13 12:38:46.077762 master-0 kubenswrapper[3989]: I0313 12:38:46.077689 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"] Mar 13 12:38:46.197661 master-0 kubenswrapper[3989]: I0313 12:38:46.197598 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"] Mar 13 12:38:46.242238 master-0 kubenswrapper[3989]: I0313 12:38:46.239471 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"] Mar 13 12:38:46.250833 master-0 
kubenswrapper[3989]: W0313 12:38:46.250427 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f66dbf5_722f_4aed_becb_fb1b62ea7fe6.slice/crio-e6d943705af2ecd94efc1b7b2e6e66854f8618298d38d9d6c5776dd66e931d3a WatchSource:0}: Error finding container e6d943705af2ecd94efc1b7b2e6e66854f8618298d38d9d6c5776dd66e931d3a: Status 404 returned error can't find the container with id e6d943705af2ecd94efc1b7b2e6e66854f8618298d38d9d6c5776dd66e931d3a Mar 13 12:38:46.259731 master-0 kubenswrapper[3989]: I0313 12:38:46.259401 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882"] Mar 13 12:38:46.309424 master-0 kubenswrapper[3989]: I0313 12:38:46.309365 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt"] Mar 13 12:38:46.309424 master-0 kubenswrapper[3989]: I0313 12:38:46.309428 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"] Mar 13 12:38:46.321519 master-0 kubenswrapper[3989]: W0313 12:38:46.321449 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf2a74c2a_8376_4998_bdc6_02a978f1f568.slice/crio-fea61f96ae5a58f1058d560f7a03de973bc0402e1a0675f1764951c0f4d6890e WatchSource:0}: Error finding container fea61f96ae5a58f1058d560f7a03de973bc0402e1a0675f1764951c0f4d6890e: Status 404 returned error can't find the container with id fea61f96ae5a58f1058d560f7a03de973bc0402e1a0675f1764951c0f4d6890e Mar 13 12:38:46.354614 master-0 kubenswrapper[3989]: I0313 12:38:46.351956 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"] Mar 13 12:38:46.358044 master-0 kubenswrapper[3989]: I0313 
12:38:46.357968 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" event={"ID":"0d868028-9984-472a-8403-ffed767e1bf8","Type":"ContainerStarted","Data":"d536b99e9f1c4d3aa396db896e6b1009ff8fdbe64376ba3de95876a07436f12a"} Mar 13 12:38:46.361316 master-0 kubenswrapper[3989]: I0313 12:38:46.361277 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" event={"ID":"6e55908e-59f3-45a2-82aa-2616c5a2fd52","Type":"ContainerStarted","Data":"bf47ad2a6c4b47eeb6f25e8817c53884dd3c9945b6828715576a49bc5541234a"} Mar 13 12:38:46.361540 master-0 kubenswrapper[3989]: W0313 12:38:46.361491 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod73dc5747_2d30_4a2d_a784_1dea1e10811d.slice/crio-1d9516f705e1b8698eb1f3dec329a0f76ba7bb5d655d5175432f90e826464bf9 WatchSource:0}: Error finding container 1d9516f705e1b8698eb1f3dec329a0f76ba7bb5d655d5175432f90e826464bf9: Status 404 returned error can't find the container with id 1d9516f705e1b8698eb1f3dec329a0f76ba7bb5d655d5175432f90e826464bf9 Mar 13 12:38:46.364472 master-0 kubenswrapper[3989]: I0313 12:38:46.363981 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h"] Mar 13 12:38:46.364648 master-0 kubenswrapper[3989]: I0313 12:38:46.364503 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" event={"ID":"1929440f-f2cc-450d-80ff-ded6788baa74","Type":"ContainerStarted","Data":"ae713e76b592ab486e74396025cc6216796b64de06bdba6168c650a39735be09"} Mar 13 12:38:46.365443 master-0 kubenswrapper[3989]: I0313 12:38:46.365418 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" event={"ID":"b2ad4825-17fa-4ddd-b21e-334158f1c048","Type":"ContainerStarted","Data":"b3ecbff0b1ffe2eac307dbf08badd582929ec9ff7e80f96a8ca7754f559637ea"} Mar 13 12:38:46.366343 master-0 kubenswrapper[3989]: I0313 12:38:46.366274 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerStarted","Data":"30c48665c9970605b1c6eec8cc08b81474d790e408c1dda1af4341df6b8abab1"} Mar 13 12:38:46.368136 master-0 kubenswrapper[3989]: I0313 12:38:46.368094 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerStarted","Data":"fea61f96ae5a58f1058d560f7a03de973bc0402e1a0675f1764951c0f4d6890e"} Mar 13 12:38:46.369150 master-0 kubenswrapper[3989]: I0313 12:38:46.369109 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" event={"ID":"603fef71-e0cd-4617-bd8a-a55580578c2f","Type":"ContainerStarted","Data":"ef7730594563babb92c30139e5b185c02149726a1290cf94d92c26f164aa3181"} Mar 13 12:38:46.370389 master-0 kubenswrapper[3989]: I0313 12:38:46.370365 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" event={"ID":"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6","Type":"ContainerStarted","Data":"e6d943705af2ecd94efc1b7b2e6e66854f8618298d38d9d6c5776dd66e931d3a"} Mar 13 12:38:46.371188 master-0 kubenswrapper[3989]: I0313 12:38:46.371162 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-456r5" 
event={"ID":"2b5ab386-14ed-4610-a08a-54b6de877603","Type":"ContainerStarted","Data":"775453f0311a20f5a59ce1be5cefed7836882d9f13ee9dc3248617ae5895d787"} Mar 13 12:38:46.376426 master-0 kubenswrapper[3989]: I0313 12:38:46.376385 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b"] Mar 13 12:38:46.378169 master-0 kubenswrapper[3989]: W0313 12:38:46.378127 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6a45be0_19ef_4d36_b8a7_eb2705d24bfa.slice/crio-18a8f8a3e194d3ca33fa06c6cb0a35721b606154a0b49ff431c90e0a47be8a6c WatchSource:0}: Error finding container 18a8f8a3e194d3ca33fa06c6cb0a35721b606154a0b49ff431c90e0a47be8a6c: Status 404 returned error can't find the container with id 18a8f8a3e194d3ca33fa06c6cb0a35721b606154a0b49ff431c90e0a47be8a6c Mar 13 12:38:46.390789 master-0 kubenswrapper[3989]: W0313 12:38:46.390660 3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54c7efc1_6d89_4831_89d6_6f2812c36c36.slice/crio-142a3bdc9b5ff21edbbdecd123b72a85c46a9bbdc67183506baedeab4865493d WatchSource:0}: Error finding container 142a3bdc9b5ff21edbbdecd123b72a85c46a9bbdc67183506baedeab4865493d: Status 404 returned error can't find the container with id 142a3bdc9b5ff21edbbdecd123b72a85c46a9bbdc67183506baedeab4865493d Mar 13 12:38:46.394412 master-0 kubenswrapper[3989]: E0313 12:38:46.394355 3989 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:copy-catalogd-manifests,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783,Command:[/bin/sh],Args:[-c cp -a /openshift/manifests 
/operand-assets/catalogd],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:operand-assets,ReadOnly:false,MountPath:/operand-assets,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qttkt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000330000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cluster-olm-operator-77899cf6d-zt57b_openshift-cluster-olm-operator(54c7efc1-6d89-4831-89d6-6f2812c36c36): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Mar 13 12:38:46.395696 master-0 kubenswrapper[3989]: E0313 12:38:46.395616 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ErrImagePull: \"pull QPS exceeded\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" podUID="54c7efc1-6d89-4831-89d6-6f2812c36c36" Mar 13 12:38:46.463306 master-0 kubenswrapper[3989]: I0313 12:38:46.463157 3989 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"] Mar 13 12:38:46.469680 master-0 kubenswrapper[3989]: W0313 12:38:46.469620 
3989 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod684c9067_189a_4f50_ac8d_97111aa73d9c.slice/crio-aca574d944d0c954b9a43d41c7decf56919de511e4613805cddc5cc602dee814 WatchSource:0}: Error finding container aca574d944d0c954b9a43d41c7decf56919de511e4613805cddc5cc602dee814: Status 404 returned error can't find the container with id aca574d944d0c954b9a43d41c7decf56919de511e4613805cddc5cc602dee814 Mar 13 12:38:46.880055 master-0 kubenswrapper[3989]: I0313 12:38:46.879863 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:46.880055 master-0 kubenswrapper[3989]: I0313 12:38:46.879935 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:46.880325 master-0 kubenswrapper[3989]: E0313 12:38:46.880105 3989 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:38:46.880325 master-0 kubenswrapper[3989]: I0313 12:38:46.880178 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " 
pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:46.880325 master-0 kubenswrapper[3989]: E0313 12:38:46.880248 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:38:48.880223545 +0000 UTC m=+132.998691222 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found Mar 13 12:38:46.880325 master-0 kubenswrapper[3989]: E0313 12:38:46.880262 3989 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:46.880325 master-0 kubenswrapper[3989]: I0313 12:38:46.880288 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:38:46.880325 master-0 kubenswrapper[3989]: E0313 12:38:46.880318 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:48.880303038 +0000 UTC m=+132.998770675 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:46.880526 master-0 kubenswrapper[3989]: E0313 12:38:46.880347 3989 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:38:46.880526 master-0 kubenswrapper[3989]: E0313 12:38:46.880413 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:48.880394621 +0000 UTC m=+132.998862308 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:38:46.880526 master-0 kubenswrapper[3989]: E0313 12:38:46.880439 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:38:46.880526 master-0 kubenswrapper[3989]: I0313 12:38:46.880443 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:38:46.880526 master-0 kubenswrapper[3989]: E0313 
12:38:46.880465 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:48.880457573 +0000 UTC m=+132.998925210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found Mar 13 12:38:46.880526 master-0 kubenswrapper[3989]: I0313 12:38:46.880502 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:38:46.880963 master-0 kubenswrapper[3989]: I0313 12:38:46.880538 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:46.880963 master-0 kubenswrapper[3989]: I0313 12:38:46.880560 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " 
pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:46.880963 master-0 kubenswrapper[3989]: E0313 12:38:46.880607 3989 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:38:46.880963 master-0 kubenswrapper[3989]: E0313 12:38:46.880633 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:48.880625918 +0000 UTC m=+132.999093555 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found Mar 13 12:38:46.880963 master-0 kubenswrapper[3989]: E0313 12:38:46.880681 3989 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:38:46.880963 master-0 kubenswrapper[3989]: E0313 12:38:46.880719 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:38:48.880709621 +0000 UTC m=+132.999177338 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found Mar 13 12:38:46.880963 master-0 kubenswrapper[3989]: E0313 12:38:46.880739 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:38:46.880963 master-0 kubenswrapper[3989]: E0313 12:38:46.880748 3989 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:38:46.880963 master-0 kubenswrapper[3989]: E0313 12:38:46.880879 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:48.880870205 +0000 UTC m=+132.999337842 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found Mar 13 12:38:46.881285 master-0 kubenswrapper[3989]: E0313 12:38:46.881096 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:48.881071772 +0000 UTC m=+132.999539409 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found Mar 13 12:38:46.982112 master-0 kubenswrapper[3989]: I0313 12:38:46.982031 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:46.982402 master-0 kubenswrapper[3989]: I0313 12:38:46.982245 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:46.982402 master-0 kubenswrapper[3989]: I0313 12:38:46.982328 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:38:46.982724 master-0 kubenswrapper[3989]: E0313 12:38:46.982678 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:38:46.982820 master-0 kubenswrapper[3989]: E0313 12:38:46.982786 3989 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:48.982764688 +0000 UTC m=+133.101232325 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found Mar 13 12:38:46.983218 master-0 kubenswrapper[3989]: E0313 12:38:46.983177 3989 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:38:46.983281 master-0 kubenswrapper[3989]: E0313 12:38:46.983251 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:48.983234501 +0000 UTC m=+133.101702138 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found Mar 13 12:38:46.983339 master-0 kubenswrapper[3989]: E0313 12:38:46.983290 3989 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:38:46.983339 master-0 kubenswrapper[3989]: E0313 12:38:46.983322 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:38:48.983312284 +0000 UTC m=+133.101779921 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found Mar 13 12:38:47.580060 master-0 kubenswrapper[3989]: I0313 12:38:47.580000 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" event={"ID":"73dc5747-2d30-4a2d-a784-1dea1e10811d","Type":"ContainerStarted","Data":"1d9516f705e1b8698eb1f3dec329a0f76ba7bb5d655d5175432f90e826464bf9"} Mar 13 12:38:47.584480 master-0 kubenswrapper[3989]: I0313 12:38:47.584288 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" event={"ID":"54c7efc1-6d89-4831-89d6-6f2812c36c36","Type":"ContainerStarted","Data":"142a3bdc9b5ff21edbbdecd123b72a85c46a9bbdc67183506baedeab4865493d"} Mar 13 12:38:47.592269 master-0 kubenswrapper[3989]: I0313 12:38:47.592070 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" event={"ID":"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa","Type":"ContainerStarted","Data":"18a8f8a3e194d3ca33fa06c6cb0a35721b606154a0b49ff431c90e0a47be8a6c"} Mar 13 12:38:47.595436 master-0 kubenswrapper[3989]: E0313 12:38:47.595387 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" 
podUID="54c7efc1-6d89-4831-89d6-6f2812c36c36" Mar 13 12:38:47.602937 master-0 kubenswrapper[3989]: I0313 12:38:47.602151 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" event={"ID":"684c9067-189a-4f50-ac8d-97111aa73d9c","Type":"ContainerStarted","Data":"59914b16ce26e359fa0f8c879d562000e5c33058f6a9e4b5ad9002af5b9b5469"} Mar 13 12:38:47.602937 master-0 kubenswrapper[3989]: I0313 12:38:47.602211 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" event={"ID":"684c9067-189a-4f50-ac8d-97111aa73d9c","Type":"ContainerStarted","Data":"aca574d944d0c954b9a43d41c7decf56919de511e4613805cddc5cc602dee814"} Mar 13 12:38:47.650089 master-0 kubenswrapper[3989]: I0313 12:38:47.650009 3989 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" podStartSLOduration=95.649986229 podStartE2EDuration="1m35.649986229s" podCreationTimestamp="2026-03-13 12:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:38:47.647687919 +0000 UTC m=+131.766155566" watchObservedRunningTime="2026-03-13 12:38:47.649986229 +0000 UTC m=+131.768453866" Mar 13 12:38:48.611274 master-0 kubenswrapper[3989]: E0313 12:38:48.609204 3989 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"copy-catalogd-manifests\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"\"" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" podUID="54c7efc1-6d89-4831-89d6-6f2812c36c36" Mar 13 12:38:48.923406 master-0 kubenswrapper[3989]: I0313 12:38:48.923002 3989 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:48.923601 master-0 kubenswrapper[3989]: I0313 12:38:48.923419 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:48.923601 master-0 kubenswrapper[3989]: E0313 12:38:48.923222 3989 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:38:48.923601 master-0 kubenswrapper[3989]: E0313 12:38:48.923568 3989 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:38:48.923732 master-0 kubenswrapper[3989]: E0313 12:38:48.923654 3989 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:38:48.923732 master-0 kubenswrapper[3989]: E0313 12:38:48.923569 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:38:52.923540486 +0000 UTC m=+137.042008193 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found Mar 13 12:38:48.923732 master-0 kubenswrapper[3989]: I0313 12:38:48.923473 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:48.923841 master-0 kubenswrapper[3989]: E0313 12:38:48.923706 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:38:52.92367629 +0000 UTC m=+137.042143927 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found Mar 13 12:38:48.923918 master-0 kubenswrapper[3989]: I0313 12:38:48.923876 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:48.923964 master-0 kubenswrapper[3989]: E0313 12:38:48.923903 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:52.923860446 +0000 UTC m=+137.042328133 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found Mar 13 12:38:48.924207 master-0 kubenswrapper[3989]: E0313 12:38:48.924047 3989 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:48.924207 master-0 kubenswrapper[3989]: E0313 12:38:48.924124 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:38:52.924101134 +0000 UTC m=+137.042568841 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:48.924286 master-0 kubenswrapper[3989]: I0313 12:38:48.924205 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:48.924286 master-0 kubenswrapper[3989]: I0313 12:38:48.924262 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:38:48.924428 master-0 kubenswrapper[3989]: I0313 12:38:48.924398 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:38:48.924690 master-0 kubenswrapper[3989]: I0313 12:38:48.924641 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:38:48.924755 master-0 kubenswrapper[3989]: E0313 12:38:48.924654 3989 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:38:48.924755 master-0 kubenswrapper[3989]: E0313 12:38:48.924733 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:38:48.924832 master-0 kubenswrapper[3989]: E0313 12:38:48.924760 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:52.924746974 +0000 UTC m=+137.043214671 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:38:48.924875 master-0 kubenswrapper[3989]: E0313 12:38:48.924845 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:52.924802945 +0000 UTC m=+137.043270642 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found Mar 13 12:38:48.924875 master-0 kubenswrapper[3989]: E0313 12:38:48.924853 3989 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:38:48.925003 master-0 kubenswrapper[3989]: E0313 12:38:48.924911 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:52.924900258 +0000 UTC m=+137.043367895 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found Mar 13 12:38:48.925003 master-0 kubenswrapper[3989]: E0313 12:38:48.924863 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:38:48.925003 master-0 kubenswrapper[3989]: E0313 12:38:48.924949 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:52.92494242 +0000 UTC m=+137.043410057 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found Mar 13 12:38:49.026557 master-0 kubenswrapper[3989]: I0313 12:38:49.026447 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:49.026793 master-0 kubenswrapper[3989]: I0313 12:38:49.026566 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:38:49.026793 master-0 kubenswrapper[3989]: E0313 12:38:49.026617 3989 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:38:49.026874 master-0 kubenswrapper[3989]: E0313 12:38:49.026778 3989 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:38:49.026874 master-0 kubenswrapper[3989]: E0313 12:38:49.026843 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:38:53.026815481 +0000 UTC m=+137.145283178 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found Mar 13 12:38:49.026957 master-0 kubenswrapper[3989]: E0313 12:38:49.026905 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:53.026859362 +0000 UTC m=+137.145327059 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found Mar 13 12:38:49.026997 master-0 kubenswrapper[3989]: I0313 12:38:49.026950 3989 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:49.027185 master-0 kubenswrapper[3989]: E0313 12:38:49.027079 3989 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:38:49.028723 master-0 kubenswrapper[3989]: E0313 12:38:49.028622 3989 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert 
podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:53.028570555 +0000 UTC m=+137.147038192 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found Mar 13 12:38:49.923855 master-0 kubenswrapper[3989]: I0313 12:38:49.923775 3989 generic.go:334] "Generic (PLEG): container finished" podID="edde8919-104a-4f05-8e21-46787f706bed" containerID="9cb3e3949a1bb640329a4953a85d4530ae11d656b3ce5bea3323fa6af6e8d03b" exitCode=0 Mar 13 12:38:49.923855 master-0 kubenswrapper[3989]: I0313 12:38:49.923839 3989 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerDied","Data":"9cb3e3949a1bb640329a4953a85d4530ae11d656b3ce5bea3323fa6af6e8d03b"} Mar 13 12:38:52.557877 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 13 12:38:52.583335 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 12:38:52.583722 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 13 12:38:52.584199 master-0 systemd[1]: kubelet.service: Consumed 13.262s CPU time. Mar 13 12:38:52.599881 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 13 12:38:52.719022 master-0 kubenswrapper[6980]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:38:52.719022 master-0 kubenswrapper[6980]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. 
Mar 13 12:38:52.719022 master-0 kubenswrapper[6980]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:38:52.719022 master-0 kubenswrapper[6980]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:38:52.719022 master-0 kubenswrapper[6980]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 13 12:38:52.720504 master-0 kubenswrapper[6980]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 12:38:52.720504 master-0 kubenswrapper[6980]: I0313 12:38:52.719154 6980 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 12:38:52.724013 master-0 kubenswrapper[6980]: W0313 12:38:52.723958 6980 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:38:52.724013 master-0 kubenswrapper[6980]: W0313 12:38:52.723995 6980 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:38:52.724013 master-0 kubenswrapper[6980]: W0313 12:38:52.724001 6980 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:38:52.724013 master-0 kubenswrapper[6980]: W0313 12:38:52.724006 6980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:38:52.724013 master-0 kubenswrapper[6980]: W0313 12:38:52.724010 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:38:52.724013 master-0 kubenswrapper[6980]: W0313 12:38:52.724015 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:38:52.724013 master-0 kubenswrapper[6980]: W0313 12:38:52.724020 6980 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:38:52.724013 master-0 kubenswrapper[6980]: W0313 12:38:52.724024 6980 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724029 6980 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724040 6980 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724044 6980 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724048 6980 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724052 6980 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724058 6980 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724063 6980 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724068 6980 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724073 6980 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724077 6980 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724081 6980 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724085 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724089 6980 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724093 6980 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724097 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724101 6980 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724105 6980 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724109 6980 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:38:52.724318 master-0 kubenswrapper[6980]: W0313 12:38:52.724112 6980 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724116 6980 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724119 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724123 6980 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724127 6980 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724131 6980 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724134 6980 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724138 6980 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724148 6980 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724152 6980 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724156 6980 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724169 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724174 6980 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724183 6980 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724187 6980 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724195 6980 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724200 6980 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724208 6980 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724211 6980 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724215 6980 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:38:52.725000 master-0 kubenswrapper[6980]: W0313 12:38:52.724219 6980 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724226 6980 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724231 6980 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724242 6980 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724249 6980 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724254 6980 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724259 6980 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724263 6980 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724268 6980 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724273 6980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724277 6980 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724282 6980 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724286 6980 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724290 6980 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724295 6980 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724299 6980 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724304 6980 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724309 6980 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724313 6980 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:38:52.726061 master-0 kubenswrapper[6980]: W0313 12:38:52.724317 6980 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: W0313 12:38:52.724322 6980 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: W0313 12:38:52.724328 6980 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: W0313 12:38:52.724333 6980 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: W0313 12:38:52.724382 6980 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: W0313 12:38:52.724388 6980 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: W0313 12:38:52.724393 6980 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724539 6980 flags.go:64] FLAG: --address="0.0.0.0"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724553 6980 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724561 6980 flags.go:64] FLAG: --anonymous-auth="true"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724568 6980 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724592 6980 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724603 6980 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724614 6980 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724620 6980 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724624 6980 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724629 6980 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724636 6980 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724641 6980 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724647 6980 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724651 6980 flags.go:64] FLAG: --cgroup-root=""
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724656 6980 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724661 6980 flags.go:64] FLAG: --client-ca-file=""
Mar 13 12:38:52.726706 master-0 kubenswrapper[6980]: I0313 12:38:52.724665 6980 flags.go:64] FLAG: --cloud-config=""
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724670 6980 flags.go:64] FLAG: --cloud-provider=""
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724674 6980 flags.go:64] FLAG: --cluster-dns="[]"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724681 6980 flags.go:64] FLAG: --cluster-domain=""
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724686 6980 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724690 6980 flags.go:64] FLAG: --config-dir=""
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724695 6980 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724700 6980 flags.go:64] FLAG: --container-log-max-files="5"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724707 6980 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724712 6980 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724717 6980 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724722 6980 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724727 6980 flags.go:64] FLAG: --contention-profiling="false"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724741 6980 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724750 6980 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724760 6980 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724764 6980 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724775 6980 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724780 6980 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724785 6980 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724789 6980 flags.go:64] FLAG: --enable-load-reader="false"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724794 6980 flags.go:64] FLAG: --enable-server="true"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724799 6980 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724805 6980 flags.go:64] FLAG: --event-burst="100"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724810 6980 flags.go:64] FLAG: --event-qps="50"
Mar 13 12:38:52.727448 master-0 kubenswrapper[6980]: I0313 12:38:52.724815 6980 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724819 6980 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724824 6980 flags.go:64] FLAG: --eviction-hard=""
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724830 6980 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724835 6980 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724840 6980 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724845 6980 flags.go:64] FLAG: --eviction-soft=""
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724849 6980 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724854 6980 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724858 6980 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724863 6980 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724868 6980 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724872 6980 flags.go:64] FLAG: --fail-swap-on="true"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724876 6980 flags.go:64] FLAG: --feature-gates=""
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724882 6980 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724887 6980 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724895 6980 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724901 6980 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724913 6980 flags.go:64] FLAG: --healthz-port="10248"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724920 6980 flags.go:64] FLAG: --help="false"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724932 6980 flags.go:64] FLAG: --hostname-override=""
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724937 6980 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724947 6980 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724955 6980 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724960 6980 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 13 12:38:52.728342 master-0 kubenswrapper[6980]: I0313 12:38:52.724965 6980 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.724969 6980 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.724974 6980 flags.go:64] FLAG: --image-service-endpoint=""
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.724979 6980 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.724983 6980 flags.go:64] FLAG: --kube-api-burst="100"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.724988 6980 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.724993 6980 flags.go:64] FLAG: --kube-api-qps="50"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.724998 6980 flags.go:64] FLAG: --kube-reserved=""
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725003 6980 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725007 6980 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725012 6980 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725017 6980 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725021 6980 flags.go:64] FLAG: --lock-file=""
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725026 6980 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725031 6980 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725037 6980 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725052 6980 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725057 6980 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725063 6980 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725067 6980 flags.go:64] FLAG: --logging-format="text"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725072 6980 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725078 6980 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725089 6980 flags.go:64] FLAG: --manifest-url=""
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725098 6980 flags.go:64] FLAG: --manifest-url-header=""
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725108 6980 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 13 12:38:52.729075 master-0 kubenswrapper[6980]: I0313 12:38:52.725113 6980 flags.go:64] FLAG: --max-open-files="1000000"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725119 6980 flags.go:64] FLAG: --max-pods="110"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725124 6980 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725129 6980 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725133 6980 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725138 6980 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725142 6980 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725152 6980 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725156 6980 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725168 6980 flags.go:64] FLAG: --node-status-max-images="50"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725172 6980 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725177 6980 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725181 6980 flags.go:64] FLAG: --pod-cidr=""
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725186 6980 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725195 6980 flags.go:64] FLAG: --pod-manifest-path=""
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725199 6980 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725204 6980 flags.go:64] FLAG: --pods-per-core="0"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725208 6980 flags.go:64] FLAG: --port="10250"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725213 6980 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725217 6980 flags.go:64] FLAG: --provider-id=""
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725222 6980 flags.go:64] FLAG: --qos-reserved=""
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725226 6980 flags.go:64] FLAG: --read-only-port="10255"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725231 6980 flags.go:64] FLAG: --register-node="true"
Mar 13 12:38:52.729845 master-0 kubenswrapper[6980]: I0313 12:38:52.725235 6980 flags.go:64] FLAG: --register-schedulable="true"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725241 6980 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725250 6980 flags.go:64] FLAG: --registry-burst="10"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725254 6980 flags.go:64] FLAG: --registry-qps="5"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725259 6980 flags.go:64] FLAG: --reserved-cpus=""
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725264 6980 flags.go:64] FLAG: --reserved-memory=""
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725270 6980 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725282 6980 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725287 6980 flags.go:64] FLAG: --rotate-certificates="false"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725296 6980 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725305 6980 flags.go:64] FLAG: --runonce="false"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725309 6980 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725314 6980 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725322 6980 flags.go:64] FLAG: --seccomp-default="false"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725327 6980 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725332 6980 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725336 6980 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725341 6980 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725346 6980 flags.go:64] FLAG: --storage-driver-password="root"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725350 6980 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725355 6980 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725359 6980 flags.go:64] FLAG: --storage-driver-user="root"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725364 6980 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725369 6980 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725373 6980 flags.go:64] FLAG: --system-cgroups=""
Mar 13 12:38:52.730648 master-0 kubenswrapper[6980]: I0313 12:38:52.725378 6980 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725385 6980 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725389 6980 flags.go:64] FLAG: --tls-cert-file=""
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725394 6980 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725400 6980 flags.go:64] FLAG: --tls-min-version=""
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725404 6980 flags.go:64] FLAG: --tls-private-key-file=""
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725408 6980 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725412 6980 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725417 6980 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725422 6980 flags.go:64] FLAG: --v="2"
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725433 6980 flags.go:64] FLAG: --version="false"
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725439 6980 flags.go:64] FLAG: --vmodule=""
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725445 6980 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: I0313 12:38:52.725450 6980 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: W0313 12:38:52.725560 6980 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: W0313 12:38:52.725567 6980 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: W0313 12:38:52.725571 6980 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: W0313 12:38:52.725590 6980 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: W0313 12:38:52.725594 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: W0313 12:38:52.725598 6980 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: W0313 12:38:52.725602 6980 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: W0313 12:38:52.725605 6980 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: W0313 12:38:52.725609 6980 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:38:52.731473 master-0 kubenswrapper[6980]: W0313 12:38:52.725613 6980 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725617 6980 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725621 6980 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725625 6980 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725629 6980 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725632 6980 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725636 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725641 6980 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725646 6980 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725659 6980 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725664 6980 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725668 6980 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725672 6980 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725677 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725681 6980 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725686 6980 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725690 6980 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725694 6980 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725698 6980 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:38:52.732135 master-0 kubenswrapper[6980]: W0313 12:38:52.725702 6980 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725706 6980 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725710 6980 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725713 6980 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725717 6980 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725721 6980 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725724 6980 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725728 6980 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725732 6980 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725736 6980 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725739 6980 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725743 6980 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725747 6980 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725751 6980 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725754 6980 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725758 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725765 6980 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725768 6980 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725772 6980 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725776 6980 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:38:52.732746 master-0 kubenswrapper[6980]: W0313 12:38:52.725780 6980 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725784 6980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725790 6980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725795 6980 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725800 6980 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725803 6980 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725807 6980 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725811 6980 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725815 6980 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725818 6980 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725822 6980 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725827 6980 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725832 6980 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725836 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725840 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725844 6980 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725849 6980 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725853 6980 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725857 6980 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 12:38:52.733530 master-0 kubenswrapper[6980]: W0313 12:38:52.725860 6980 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.725865 6980 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.725870 6980 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.725873 6980 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.725878 6980 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: I0313 12:38:52.725893 6980 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: I0313 12:38:52.733623 6980 server.go:491] "Kubelet version" kubeletVersion="v1.31.14" Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: I0313 12:38:52.733661 6980 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.733736 6980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.733745 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.733749 6980 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.733753 6980 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity 
Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.733758 6980 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.733762 6980 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.733766 6980 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 12:38:52.734098 master-0 kubenswrapper[6980]: W0313 12:38:52.733770 6980 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733774 6980 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733778 6980 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733782 6980 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733786 6980 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733790 6980 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733793 6980 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733797 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733800 6980 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733804 6980 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 12:38:52.734519 master-0 
kubenswrapper[6980]: W0313 12:38:52.733808 6980 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733812 6980 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733815 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733819 6980 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733823 6980 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733826 6980 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733830 6980 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733834 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733837 6980 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733848 6980 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 12:38:52.734519 master-0 kubenswrapper[6980]: W0313 12:38:52.733852 6980 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733861 6980 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733869 6980 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733875 6980 
feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733882 6980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733886 6980 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733895 6980 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733899 6980 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733903 6980 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733907 6980 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733911 6980 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733914 6980 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733918 6980 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733922 6980 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733926 6980 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733929 6980 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 12:38:52.735146 master-0 
kubenswrapper[6980]: W0313 12:38:52.733933 6980 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733936 6980 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733940 6980 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 12:38:52.735146 master-0 kubenswrapper[6980]: W0313 12:38:52.733944 6980 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.733948 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.733954 6980 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.733961 6980 feature_gate.go:330] unrecognized feature gate: Example Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.733971 6980 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.733976 6980 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.733981 6980 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.733985 6980 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.733990 6980 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.733995 6980 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 
12:38:52.734000 6980 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.734004 6980 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.734012 6980 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.734020 6980 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.734028 6980 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.734034 6980 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.734043 6980 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.734049 6980 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.734055 6980 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 12:38:52.735798 master-0 kubenswrapper[6980]: W0313 12:38:52.734062 6980 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734067 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734072 6980 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734077 6980 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734081 6980 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734087 6980 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734091 6980 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: I0313 12:38:52.734099 6980 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 12:38:52.736536 master-0 
kubenswrapper[6980]: W0313 12:38:52.734266 6980 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734283 6980 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734291 6980 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734297 6980 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734302 6980 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734307 6980 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734312 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 12:38:52.736536 master-0 kubenswrapper[6980]: W0313 12:38:52.734318 6980 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734325 6980 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734334 6980 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734339 6980 feature_gate.go:330] unrecognized feature gate: Example Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734344 6980 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734349 6980 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: 
W0313 12:38:52.734353 6980 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734358 6980 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734365 6980 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734372 6980 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734378 6980 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734395 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734402 6980 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734411 6980 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734416 6980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734421 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734429 6980 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734434 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 12:38:52.736966 master-0 kubenswrapper[6980]: W0313 12:38:52.734439 6980 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 12:38:52.736966 master-0 
kubenswrapper[6980]: W0313 12:38:52.734443 6980 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734448 6980 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734453 6980 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734462 6980 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734467 6980 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734471 6980 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734475 6980 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734479 6980 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734485 6980 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734490 6980 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734495 6980 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734500 6980 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734504 6980 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734510 6980 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734514 6980 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734519 6980 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734524 6980 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734529 6980 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734534 6980 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734538 6980 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 12:38:52.737486 master-0 kubenswrapper[6980]: W0313 12:38:52.734543 6980 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734547 6980 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 12:38:52.738717 master-0 
kubenswrapper[6980]: W0313 12:38:52.734551 6980 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734556 6980 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734561 6980 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734566 6980 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734587 6980 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734594 6980 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734600 6980 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734605 6980 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734610 6980 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734614 6980 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734619 6980 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734624 6980 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734629 6980 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: 
W0313 12:38:52.734634 6980 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734639 6980 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734643 6980 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734650 6980 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734656 6980 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 12:38:52.738717 master-0 kubenswrapper[6980]: W0313 12:38:52.734661 6980 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: W0313 12:38:52.734665 6980 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: W0313 12:38:52.734669 6980 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: W0313 12:38:52.734674 6980 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: W0313 12:38:52.734679 6980 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: W0313 12:38:52.734684 6980 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: I0313 12:38:52.734692 6980 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false 
StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: I0313 12:38:52.734915 6980 server.go:940] "Client rotation is on, will bootstrap in background" Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: I0313 12:38:52.737407 6980 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: I0313 12:38:52.737515 6980 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: I0313 12:38:52.737767 6980 server.go:997] "Starting client certificate rotation" Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: I0313 12:38:52.737786 6980 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: I0313 12:38:52.737989 6980 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 12:27:50 +0000 UTC, rotation deadline is 2026-03-14 07:01:15.359161042 +0000 UTC Mar 13 12:38:52.739467 master-0 kubenswrapper[6980]: I0313 12:38:52.738091 6980 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h22m22.621072459s for next certificate rotation Mar 13 12:38:52.740085 master-0 kubenswrapper[6980]: I0313 12:38:52.739291 6980 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 12:38:52.740928 master-0 kubenswrapper[6980]: I0313 12:38:52.740903 6980 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 12:38:52.744043 master-0 
kubenswrapper[6980]: I0313 12:38:52.743996 6980 log.go:25] "Validated CRI v1 runtime API" Mar 13 12:38:52.750166 master-0 kubenswrapper[6980]: I0313 12:38:52.750090 6980 log.go:25] "Validated CRI v1 image API" Mar 13 12:38:52.751697 master-0 kubenswrapper[6980]: I0313 12:38:52.751674 6980 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 12:38:52.755312 master-0 kubenswrapper[6980]: I0313 12:38:52.755247 6980 fs.go:135] Filesystem UUIDs: map[1540ec0a-5f02-47ef-9901-1615d58a2814:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Mar 13 12:38:52.755644 master-0 kubenswrapper[6980]: I0313 12:38:52.755296 6980 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/142a3bdc9b5ff21edbbdecd123b72a85c46a9bbdc67183506baedeab4865493d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/142a3bdc9b5ff21edbbdecd123b72a85c46a9bbdc67183506baedeab4865493d/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/18a8f8a3e194d3ca33fa06c6cb0a35721b606154a0b49ff431c90e0a47be8a6c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/18a8f8a3e194d3ca33fa06c6cb0a35721b606154a0b49ff431c90e0a47be8a6c/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d9516f705e1b8698eb1f3dec329a0f76ba7bb5d655d5175432f90e826464bf9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d9516f705e1b8698eb1f3dec329a0f76ba7bb5d655d5175432f90e826464bf9/userdata/shm major:0 minor:268 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/30c48665c9970605b1c6eec8cc08b81474d790e408c1dda1af4341df6b8abab1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/30c48665c9970605b1c6eec8cc08b81474d790e408c1dda1af4341df6b8abab1/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31d28339b74a0d08ca9d705b4d13c84a3aaf85f1383fa6b578b10c51b3fe36e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31d28339b74a0d08ca9d705b4d13c84a3aaf85f1383fa6b578b10c51b3fe36e2/userdata/shm major:0 minor:148 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6970059f480dc091ae05c0c7c9205d04df86a1f3452392a79024b011c7f566dc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6970059f480dc091ae05c0c7c9205d04df86a1f3452392a79024b011c7f566dc/userdata/shm major:0 minor:130 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/775453f0311a20f5a59ce1be5cefed7836882d9f13ee9dc3248617ae5895d787/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/775453f0311a20f5a59ce1be5cefed7836882d9f13ee9dc3248617ae5895d787/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a31388bf3eb4be6295c3f302e94eade7f88980688dad331a6fb5026c223c9070/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a31388bf3eb4be6295c3f302e94eade7f88980688dad331a6fb5026c223c9070/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a52c7346de93add1d237d99f0d1a7027e99e77d0afd84eceb9bcc49809bf923e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a52c7346de93add1d237d99f0d1a7027e99e77d0afd84eceb9bcc49809bf923e/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aca574d944d0c954b9a43d41c7decf56919de511e4613805cddc5cc602dee814/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aca574d944d0c954b9a43d41c7decf56919de511e4613805cddc5cc602dee814/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ae713e76b592ab486e74396025cc6216796b64de06bdba6168c650a39735be09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ae713e76b592ab486e74396025cc6216796b64de06bdba6168c650a39735be09/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b3ecbff0b1ffe2eac307dbf08badd582929ec9ff7e80f96a8ca7754f559637ea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b3ecbff0b1ffe2eac307dbf08badd582929ec9ff7e80f96a8ca7754f559637ea/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bf47ad2a6c4b47eeb6f25e8817c53884dd3c9945b6828715576a49bc5541234a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bf47ad2a6c4b47eeb6f25e8817c53884dd3c9945b6828715576a49bc5541234a/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c50b66c08b64d0837766db36e00d9e48a3e7f90a13ec9264ea03f094b56406e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c50b66c08b64d0837766db36e00d9e48a3e7f90a13ec9264ea03f094b56406e2/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c80b4d29df703d07a23db2b30b8fb506c55a2da67bacba3eebf13044aa056687/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c80b4d29df703d07a23db2b30b8fb506c55a2da67bacba3eebf13044aa056687/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d536b99e9f1c4d3aa396db896e6b1009ff8fdbe64376ba3de95876a07436f12a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d536b99e9f1c4d3aa396db896e6b1009ff8fdbe64376ba3de95876a07436f12a/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d5dc6c6e80f445d51122ad5b527a93180bba8d53bfd02a0ec172defc7ab4ca77/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d5dc6c6e80f445d51122ad5b527a93180bba8d53bfd02a0ec172defc7ab4ca77/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e6d943705af2ecd94efc1b7b2e6e66854f8618298d38d9d6c5776dd66e931d3a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e6d943705af2ecd94efc1b7b2e6e66854f8618298d38d9d6c5776dd66e931d3a/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ef7730594563babb92c30139e5b185c02149726a1290cf94d92c26f164aa3181/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef7730594563babb92c30139e5b185c02149726a1290cf94d92c26f164aa3181/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fea61f96ae5a58f1058d560f7a03de973bc0402e1a0675f1764951c0f4d6890e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fea61f96ae5a58f1058d560f7a03de973bc0402e1a0675f1764951c0f4d6890e/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~projected/kube-api-access major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/kube-api-access-bdvgq:{mountpoint:/var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/kube-api-access-bdvgq 
major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~projected/kube-api-access major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~projected/kube-api-access-b2lvh:{mountpoint:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~projected/kube-api-access-b2lvh major:0 minor:153 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20217cff-2f81-4a56-9c15-28385c19258c/volumes/kubernetes.io~projected/kube-api-access-nvprm:{mountpoint:/var/lib/kubelet/pods/20217cff-2f81-4a56-9c15-28385c19258c/volumes/kubernetes.io~projected/kube-api-access-nvprm major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a5976df-0366-47b3-bc54-1ba7c249e87c/volumes/kubernetes.io~projected/kube-api-access-27pbr:{mountpoint:/var/lib/kubelet/pods/2a5976df-0366-47b3-bc54-1ba7c249e87c/volumes/kubernetes.io~projected/kube-api-access-27pbr major:0 minor:229 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2b5ab386-14ed-4610-a08a-54b6de877603/volumes/kubernetes.io~projected/kube-api-access-nqxjz:{mountpoint:/var/lib/kubelet/pods/2b5ab386-14ed-4610-a08a-54b6de877603/volumes/kubernetes.io~projected/kube-api-access-nqxjz major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~projected/kube-api-access-5jknp:{mountpoint:/var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~projected/kube-api-access-5jknp major:0 minor:91 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~projected/kube-api-access-r9sfh:{mountpoint:/var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~projected/kube-api-access-r9sfh major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4f942fce-07a9-4377-8330-c6249a5a8b24/volumes/kubernetes.io~projected/kube-api-access-7s2cb:{mountpoint:/var/lib/kubelet/pods/4f942fce-07a9-4377-8330-c6249a5a8b24/volumes/kubernetes.io~projected/kube-api-access-7s2cb major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~projected/kube-api-access-qttkt:{mountpoint:/var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~projected/kube-api-access-qttkt major:0 minor:242 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~projected/kube-api-access-pqm5h:{mountpoint:/var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~projected/kube-api-access-pqm5h major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59c9773d-7e88-4e30-9b8a-792a869a860e/volumes/kubernetes.io~projected/kube-api-access-vp6bn:{mountpoint:/var/lib/kubelet/pods/59c9773d-7e88-4e30-9b8a-792a869a860e/volumes/kubernetes.io~projected/kube-api-access-vp6bn major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~projected/kube-api-access-x27d2:{mountpoint:/var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~projected/kube-api-access-x27d2 major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~projected/kube-api-access-rspzx:{mountpoint:/var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~projected/kube-api-access-rspzx major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d1a0616-4479-4621-b042-36a586bd8248/volumes/kubernetes.io~projected/kube-api-access-jn59j:{mountpoint:/var/lib/kubelet/pods/6d1a0616-4479-4621-b042-36a586bd8248/volumes/kubernetes.io~projected/kube-api-access-jn59j major:0 minor:115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e4e773c-d970-4f5e-9172-c1ebdb41888d/volumes/kubernetes.io~projected/kube-api-access-tdcsm:{mountpoint:/var/lib/kubelet/pods/6e4e773c-d970-4f5e-9172-c1ebdb41888d/volumes/kubernetes.io~projected/kube-api-access-tdcsm major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~projected/kube-api-access-x2jkn:{mountpoint:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~projected/kube-api-access-x2jkn major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/etcd-client major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/71b741d4-3899-4d31-afd1-72f5a9321f75/volumes/kubernetes.io~projected/kube-api-access-2h5ht:{mountpoint:/var/lib/kubelet/pods/71b741d4-3899-4d31-afd1-72f5a9321f75/volumes/kubernetes.io~projected/kube-api-access-2h5ht major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~projected/kube-api-access-9vsld:{mountpoint:/var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~projected/kube-api-access-9vsld major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8226ffac-1f76-4eaa-ada5-056b5fd031b4/volumes/kubernetes.io~projected/kube-api-access-gkcxc:{mountpoint:/var/lib/kubelet/pods/8226ffac-1f76-4eaa-ada5-056b5fd031b4/volumes/kubernetes.io~projected/kube-api-access-gkcxc major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa/volumes/kubernetes.io~projected/kube-api-access-9n8sb:{mountpoint:/var/lib/kubelet/pods/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa/volumes/kubernetes.io~projected/kube-api-access-9n8sb major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~projected/kube-api-access-tnbf9:{mountpoint:/var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~projected/kube-api-access-tnbf9 major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/kube-api-access-c2dq8:{mountpoint:/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/kube-api-access-c2dq8 major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~projected/kube-api-access-qg7nx:{mountpoint:/var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~projected/kube-api-access-qg7nx major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~secret/webhook-cert major:0 minor:147 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e3eb38e0-d8b5-46fc-809d-73791d569816/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/e3eb38e0-d8b5-46fc-809d-73791d569816/volumes/kubernetes.io~projected/kube-api-access major:0 minor:92 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~projected/kube-api-access-992bv:{mountpoint:/var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~projected/kube-api-access-992bv major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~projected/kube-api-access-bkjph:{mountpoint:/var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~projected/kube-api-access-bkjph major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f85ab8ab-f9f1-47ad-9c96-9498cef92474/volumes/kubernetes.io~projected/kube-api-access-sm25n:{mountpoint:/var/lib/kubelet/pods/f85ab8ab-f9f1-47ad-9c96-9498cef92474/volumes/kubernetes.io~projected/kube-api-access-sm25n major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ffcc3a23-d81c-4064-a24a-857dbe3222c8/volumes/kubernetes.io~projected/kube-api-access-b9nhl:{mountpoint:/var/lib/kubelet/pods/ffcc3a23-d81c-4064-a24a-857dbe3222c8/volumes/kubernetes.io~projected/kube-api-access-b9nhl major:0 minor:99 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/a28fa2cc999e6ea267ed23f28dba3465fc9522d5cf6be0687960b641050df30e/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/29ba9a740bc3f512656a313f274e7e1f23e91e2bb2d95f6a187ab3b7abfbde21/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/16d96c773637708d6b7cab625fe0da3dd37225d87ec882aa4cc1a487fd0590df/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/dca38fd48b5ebeacfd26d2b32201ac9e0db764064e13ff5a00ed52918ed5204a/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/a28c1a43a8547db9a0970c5d40a83a9590954264934b31d8f1eb937718b2995c/merged major:0 minor:121 fsType:overlay 
blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/bc43f43227f34389d2225dd542635315c87bd2060e3b14a0bf159a8238fa938c/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/b1e093b896e85aca9f6371b7751b9dee79cf60bbe371c27d459138d316d7e810/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/4a4cd543c084de798a40be1c0757a8d0ca430d4e811311f3751f22a8041977ea/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/34792a753031d2349d674e51920fe61bdd9cd73077ef9e2228a625524025e99a/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/c62f25ef2a9658370707b91e96aae9af71a8f442a0fb71f1df5598b041b7aa31/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/d0b4f1fafc8d674b2e3696136e08786964ff7f982afdbfbbe567935922adc564/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/6ee126ff3b6054be33f571ea0c8d65b9da1316a6e227bc106cfbad6e5dcf07a1/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-163:{mountpoint:/var/lib/containers/storage/overlay/e3cd5619f1d5d3812fea03e41b695ac2d487f88fd76c63bc9afefd9c11971acb/merged major:0 minor:163 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/f8ac89c418e5cd7b53ee530847fc36ac6d27de85738be9c29b422e56d63f8595/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/7d0042807484a593a0aee58be2989d5316b59ed7228c6159f9f9dfd985612e59/merged major:0 minor:170 fsType:overlay blockSize:0} overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/45da25be612c4f2c20d4c58f76749be983448743ebd14e3d70de9eca9a39cbb3/merged major:0 minor:172 fsType:overlay blockSize:0} 
overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/fc8978852098c9a607d9c5942035fe5ac1e042470c7b64b1771ad0a6282e98cf/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/d0be7809054f0b60db4b4ca8174a6a091335cc62801e3e2660a6fcdb39f0fcd5/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/bdacbd9f67c1ae841356cc5505cfe7cabb61f23ddc9afd8f2426edb56dc1e16d/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/77c2ac9f214366281fedd6b4c93acb5d53850f9988f0ba9879ddbaa2ae47fccb/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/f0473b891cd17d54078edc9714413874ed4ec5a8c54e81f18ecf072655f27187/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/bae226474312c4f074428cabf7ca44ae701721671afabf5eaa90f1d615ade77b/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-274:{mountpoint:/var/lib/containers/storage/overlay/9bceed6807491d97f03d49ee87c3e80e13b76232d7fb414cd4a4186875a89a23/merged major:0 minor:274 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/f5c5507be6da33615108db90c7fd8d7d1b74cc3bc3ea00ec4477eec67e94d4a3/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/cafc4c9bccd073d6b92f98229988f17782bb3289ba17e5872026c9200cff879a/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/b8a34d525fc6d0eeafb6ac11156dd5fd2dad3e276f992b01f2a7f1f81a42f0cf/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/364b2e2dbdd1e2e410f7f2ea963c2b6abfacf34c8591527f04940dc9adfecb2e/merged major:0 minor:283 fsType:overlay blockSize:0} 
overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/9681aba4a3d724eeec30885cb8754b7ca9e603ccbb924ae3d8f8b7be6ca85241/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/f70103bfc1ccf98bf8c9717afb805dc2ee414e099d480d15a2bb29756c4d1d9e/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/1232804f7100b4c67264884b68e33f2a5ae4f93dcc131e7dccc108f0697cde12/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/80643ba69c27e2091aafe5dbec882a5c922dfe53fc07f4be35a343f8e7899ab8/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/9485f212a05b901c03857187eb7abaff4c2b0ccea34f3aba96e71d73474052c0/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/b371a8539515b4358a080def996b2442173ccd541ff829f84c61fae539c7cf28/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/af237823629abe3f0f5ebb9f3cb2fb8ba99eefeac234c5a15f9b44b3c42c68e5/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/3c95633d82688147b6541405cfd3822778b05a952d6b67c4cd194b43406cf964/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/2b71080b413cc5017b5b8dab9f091b853071fe9a037fdfd3d10e9c682f844d9a/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/87f58434821697b05b741ead6c15660b37752d5725a717c1002ab057da14b1b6/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/51147c177173dc1bf1a4aff6041fc0b8aae6171e7ade6f399a8985ac69622364/merged major:0 minor:48 fsType:overlay blockSize:0} 
overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/c79b32f8bc49841a0d6c157c3a0abb4153a59921426ae6b27e2a6a10a25f3d5a/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/90b131dcd256f10464f3cbe95bb1f8420eef5f195fb6828b2e4566a1c8c88055/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/11ea667b8da90c10db4848e3c40ea98ea30f7afcc6db1f4cdf0a59b481f6b4ba/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/5fda47e76f79f9786a567a8497279fe3d0f80eec34c16e43d8a415551c7d2fca/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/f8bc6ba807065a838a0a189e0f8dcf31c7f17846cac101e88b33c7bdb4ed2461/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/ba6851b2610ce478d39da55008f09969f99eda6c9ecac7e59126cab86f6fb55b/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/36c015573696df7c31d992e14fa06a5b04e4024c28adcddc32f33a858ce875dd/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/c862d9dbda5efa0ea483cb7508c39a7de7203854324f27558dea3b48813ae4b0/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-83:{mountpoint:/var/lib/containers/storage/overlay/3dd0575158d536f7f71e71e57ed87c9722ff80e9c59108c57076e81f1a6dfe36/merged major:0 minor:83 fsType:overlay blockSize:0} overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/6d6fc62105ee1654d1dc603aa58ca8abc44a5951824878d0b3daa0a309fe12ee/merged major:0 minor:89 fsType:overlay blockSize:0}] Mar 13 12:38:52.784466 master-0 kubenswrapper[6980]: I0313 12:38:52.783340 6980 manager.go:217] Machine: {Timestamp:2026-03-13 12:38:52.782457882 +0000 UTC m=+0.116452518 CPUVendorID:AuthenticAMD NumCores:12 
NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:fe2021b5fe9941cbb2f9ca5654d6ac6f SystemUUID:fe2021b5-fe99-41cb-b2f9-ca5654d6ac6f BootID:1315907d-16f0-44fe-950e-68be880afcd6 Filesystems:[{Device:overlay_0-83 DeviceMajor:0 DeviceMinor:83 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~projected/kube-api-access-qg7nx DeviceMajor:0 DeviceMinor:140 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~projected/kube-api-access-bkjph DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6e4e773c-d970-4f5e-9172-c1ebdb41888d/volumes/kubernetes.io~projected/kube-api-access-tdcsm DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bf47ad2a6c4b47eeb6f25e8817c53884dd3c9945b6828715576a49bc5541234a/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/4f942fce-07a9-4377-8330-c6249a5a8b24/volumes/kubernetes.io~projected/kube-api-access-7s2cb DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef7730594563babb92c30139e5b185c02149726a1290cf94d92c26f164aa3181/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/kube-api-access-c2dq8 DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/kube-api-access-bdvgq DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e3eb38e0-d8b5-46fc-809d-73791d569816/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:92 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31d28339b74a0d08ca9d705b4d13c84a3aaf85f1383fa6b578b10c51b3fe36e2/userdata/shm DeviceMajor:0 DeviceMinor:148 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~projected/kube-api-access-pqm5h DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/20217cff-2f81-4a56-9c15-28385c19258c/volumes/kubernetes.io~projected/kube-api-access-nvprm DeviceMajor:0 DeviceMinor:238 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/71b741d4-3899-4d31-afd1-72f5a9321f75/volumes/kubernetes.io~projected/kube-api-access-2h5ht DeviceMajor:0 DeviceMinor:240 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fea61f96ae5a58f1058d560f7a03de973bc0402e1a0675f1764951c0f4d6890e/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~projected/kube-api-access-tnbf9 DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d9516f705e1b8698eb1f3dec329a0f76ba7bb5d655d5175432f90e826464bf9/userdata/shm DeviceMajor:0 DeviceMinor:268 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~projected/kube-api-access-qttkt DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c50b66c08b64d0837766db36e00d9e48a3e7f90a13ec9264ea03f094b56406e2/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ffcc3a23-d81c-4064-a24a-857dbe3222c8/volumes/kubernetes.io~projected/kube-api-access-b9nhl DeviceMajor:0 DeviceMinor:99 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/a52c7346de93add1d237d99f0d1a7027e99e77d0afd84eceb9bcc49809bf923e/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~projected/kube-api-access-b2lvh DeviceMajor:0 DeviceMinor:153 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e6d943705af2ecd94efc1b7b2e6e66854f8618298d38d9d6c5776dd66e931d3a/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/18a8f8a3e194d3ca33fa06c6cb0a35721b606154a0b49ff431c90e0a47be8a6c/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aca574d944d0c954b9a43d41c7decf56919de511e4613805cddc5cc602dee814/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6d1a0616-4479-4621-b042-36a586bd8248/volumes/kubernetes.io~projected/kube-api-access-jn59j DeviceMajor:0 DeviceMinor:115 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2a5976df-0366-47b3-bc54-1ba7c249e87c/volumes/kubernetes.io~projected/kube-api-access-27pbr DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8226ffac-1f76-4eaa-ada5-056b5fd031b4/volumes/kubernetes.io~projected/kube-api-access-gkcxc DeviceMajor:0 DeviceMinor:243 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/30c48665c9970605b1c6eec8cc08b81474d790e408c1dda1af4341df6b8abab1/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~projected/kube-api-access-9vsld DeviceMajor:0 
DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b3ecbff0b1ffe2eac307dbf08badd582929ec9ff7e80f96a8ca7754f559637ea/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d536b99e9f1c4d3aa396db896e6b1009ff8fdbe64376ba3de95876a07436f12a/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~projected/kube-api-access-x2jkn DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ae713e76b592ab486e74396025cc6216796b64de06bdba6168c650a39735be09/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c80b4d29df703d07a23db2b30b8fb506c55a2da67bacba3eebf13044aa056687/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~projected/kube-api-access-r9sfh DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/775453f0311a20f5a59ce1be5cefed7836882d9f13ee9dc3248617ae5895d787/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~projected/kube-api-access-x27d2 DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6970059f480dc091ae05c0c7c9205d04df86a1f3452392a79024b011c7f566dc/userdata/shm DeviceMajor:0 DeviceMinor:130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-274 DeviceMajor:0 DeviceMinor:274 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 
DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/59c9773d-7e88-4e30-9b8a-792a869a860e/volumes/kubernetes.io~projected/kube-api-access-vp6bn DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/142a3bdc9b5ff21edbbdecd123b72a85c46a9bbdc67183506baedeab4865493d/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa/volumes/kubernetes.io~projected/kube-api-access-9n8sb DeviceMajor:0 DeviceMinor:241 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d5dc6c6e80f445d51122ad5b527a93180bba8d53bfd02a0ec172defc7ab4ca77/userdata/shm DeviceMajor:0 DeviceMinor:50 
Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~projected/kube-api-access-992bv DeviceMajor:0 DeviceMinor:245 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-163 DeviceMajor:0 DeviceMinor:163 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:147 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f85ab8ab-f9f1-47ad-9c96-9498cef92474/volumes/kubernetes.io~projected/kube-api-access-sm25n DeviceMajor:0 DeviceMinor:246 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~projected/kube-api-access-5jknp DeviceMajor:0 DeviceMinor:91 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:129 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a31388bf3eb4be6295c3f302e94eade7f88980688dad331a6fb5026c223c9070/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~projected/kube-api-access-rspzx DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2b5ab386-14ed-4610-a08a-54b6de877603/volumes/kubernetes.io~projected/kube-api-access-nqxjz DeviceMajor:0 DeviceMinor:260 
Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:142a3bdc9b5ff21 MacAddress:72:a1:3d:ec:a3:06 Speed:10000 Mtu:8900} {Name:18a8f8a3e194d3c MacAddress:d2:60:78:cb:35:cc Speed:10000 Mtu:8900} {Name:1d9516f705e1b86 MacAddress:36:0d:b4:55:8e:48 Speed:10000 Mtu:8900} {Name:30c48665c997060 MacAddress:b6:72:85:9a:ef:6c Speed:10000 Mtu:8900} {Name:aca574d944d0c95 MacAddress:2a:e7:67:30:3c:4b Speed:10000 Mtu:8900} {Name:ae713e76b592ab4 MacAddress:86:a1:b0:38:e9:eb Speed:10000 Mtu:8900} {Name:b3ecbff0b1ffe2e MacAddress:7a:e2:cb:02:e9:31 Speed:10000 Mtu:8900} {Name:bf47ad2a6c4b47e MacAddress:c2:f2:1d:53:39:e8 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:e2:2f:c5:ea:9f:67 Speed:0 Mtu:8900} {Name:d536b99e9f1c4d3 MacAddress:72:31:af:24:3d:79 Speed:10000 Mtu:8900} {Name:e6d943705af2ecd MacAddress:da:fb:e7:9b:2f:3a Speed:10000 Mtu:8900} {Name:ef7730594563bab MacAddress:5a:b7:71:29:81:e8 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:fc:21:de Speed:-1 Mtu:9000} {Name:fea61f96ae5a58f MacAddress:d2:b0:03:ea:b8:ce Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:f6:fc:d3:7e:3d:76 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 
Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] 
SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 12:38:52.784466 master-0 kubenswrapper[6980]: I0313 12:38:52.784350 6980 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 13 12:38:52.785062 master-0 kubenswrapper[6980]: I0313 12:38:52.784520 6980 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 12:38:52.785613 master-0 kubenswrapper[6980]: I0313 12:38:52.785545 6980 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 12:38:52.786005 master-0 kubenswrapper[6980]: I0313 12:38:52.785942 6980 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 12:38:52.786667 master-0 kubenswrapper[6980]: I0313 12:38:52.786031 6980 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 12:38:52.786950 master-0 kubenswrapper[6980]: I0313 12:38:52.786723 6980 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 12:38:52.786950 master-0 kubenswrapper[6980]: I0313 12:38:52.786742 6980 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 12:38:52.786950 master-0 kubenswrapper[6980]: I0313 12:38:52.786772 6980 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 12:38:52.786950 master-0 kubenswrapper[6980]: I0313 12:38:52.786812 6980 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 12:38:52.787153 master-0 kubenswrapper[6980]: I0313 12:38:52.787127 6980 state_mem.go:36] "Initialized new in-memory state store" Mar 13 12:38:52.787287 master-0 kubenswrapper[6980]: I0313 12:38:52.787260 6980 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 13 12:38:52.787557 master-0 kubenswrapper[6980]: I0313 12:38:52.787532 6980 kubelet.go:418] "Attempting to sync node with API server" Mar 13 12:38:52.787557 master-0 kubenswrapper[6980]: I0313 12:38:52.787554 6980 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 12:38:52.787639 master-0 kubenswrapper[6980]: I0313 12:38:52.787594 6980 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 13 12:38:52.787639 master-0 kubenswrapper[6980]: I0313 12:38:52.787613 6980 kubelet.go:324] "Adding apiserver pod source" Mar 13 12:38:52.787639 master-0 
kubenswrapper[6980]: I0313 12:38:52.787639 6980 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 12:38:52.797267 master-0 kubenswrapper[6980]: I0313 12:38:52.797158 6980 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 13 12:38:52.797508 master-0 kubenswrapper[6980]: I0313 12:38:52.797477 6980 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 13 12:38:52.797864 master-0 kubenswrapper[6980]: I0313 12:38:52.797832 6980 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 13 12:38:52.798094 master-0 kubenswrapper[6980]: I0313 12:38:52.798064 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 13 12:38:52.798129 master-0 kubenswrapper[6980]: I0313 12:38:52.798094 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 13 12:38:52.798129 master-0 kubenswrapper[6980]: I0313 12:38:52.798106 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 13 12:38:52.798129 master-0 kubenswrapper[6980]: I0313 12:38:52.798115 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 13 12:38:52.798129 master-0 kubenswrapper[6980]: I0313 12:38:52.798123 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 13 12:38:52.798129 master-0 kubenswrapper[6980]: I0313 12:38:52.798132 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 13 12:38:52.798285 master-0 kubenswrapper[6980]: I0313 12:38:52.798141 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 13 12:38:52.798285 master-0 kubenswrapper[6980]: I0313 12:38:52.798151 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 13 12:38:52.798285 master-0 
kubenswrapper[6980]: I0313 12:38:52.798160 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 13 12:38:52.798285 master-0 kubenswrapper[6980]: I0313 12:38:52.798168 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 13 12:38:52.798285 master-0 kubenswrapper[6980]: I0313 12:38:52.798180 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 13 12:38:52.798285 master-0 kubenswrapper[6980]: I0313 12:38:52.798195 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 13 12:38:52.798285 master-0 kubenswrapper[6980]: I0313 12:38:52.798250 6980 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 13 12:38:52.799807 master-0 kubenswrapper[6980]: I0313 12:38:52.799773 6980 server.go:1280] "Started kubelet"
Mar 13 12:38:52.799936 master-0 kubenswrapper[6980]: I0313 12:38:52.799899 6980 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 12:38:52.800663 master-0 kubenswrapper[6980]: I0313 12:38:52.800491 6980 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 12:38:52.800726 master-0 kubenswrapper[6980]: I0313 12:38:52.800702 6980 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 13 12:38:52.801411 master-0 kubenswrapper[6980]: I0313 12:38:52.801373 6980 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 12:38:52.803124 master-0 kubenswrapper[6980]: I0313 12:38:52.802333 6980 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 13 12:38:52.804939 master-0 kubenswrapper[6980]: I0313 12:38:52.804770 6980 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 13 12:38:52.804939 master-0 kubenswrapper[6980]: I0313 12:38:52.804900 6980 server.go:449] "Adding debug handlers to kubelet server"
Mar 13 12:38:52.807646 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 13 12:38:52.813673 master-0 kubenswrapper[6980]: I0313 12:38:52.813516 6980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 13 12:38:52.813673 master-0 kubenswrapper[6980]: I0313 12:38:52.813611 6980 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 12:38:52.813827 master-0 kubenswrapper[6980]: I0313 12:38:52.813658 6980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 12:27:50 +0000 UTC, rotation deadline is 2026-03-14 08:51:04.537897375 +0000 UTC
Mar 13 12:38:52.813827 master-0 kubenswrapper[6980]: I0313 12:38:52.813714 6980 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h12m11.724187535s for next certificate rotation
Mar 13 12:38:52.814629 master-0 kubenswrapper[6980]: I0313 12:38:52.814032 6980 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 13 12:38:52.814629 master-0 kubenswrapper[6980]: I0313 12:38:52.814049 6980 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 13 12:38:52.814629 master-0 kubenswrapper[6980]: I0313 12:38:52.814194 6980 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 13 12:38:52.818103 master-0 kubenswrapper[6980]: I0313 12:38:52.818011 6980 factory.go:55] Registering systemd factory
Mar 13 12:38:52.818401 master-0 kubenswrapper[6980]: I0313 12:38:52.818356 6980 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 13 12:38:52.818503 master-0 kubenswrapper[6980]: I0313 12:38:52.818409 6980 factory.go:221] Registration of the systemd container factory successfully
Mar 13 12:38:52.818873 master-0 kubenswrapper[6980]: I0313 12:38:52.818841 6980 factory.go:153] Registering CRI-O factory
Mar 13 12:38:52.818873 master-0 kubenswrapper[6980]: I0313 12:38:52.818875 6980 factory.go:221] Registration of the crio container factory successfully
Mar 13 12:38:52.818969 master-0 kubenswrapper[6980]: I0313 12:38:52.818954 6980 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 13 12:38:52.819016 master-0 kubenswrapper[6980]: I0313 12:38:52.818987 6980 factory.go:103] Registering Raw factory
Mar 13 12:38:52.819016 master-0 kubenswrapper[6980]: I0313 12:38:52.819005 6980 manager.go:1196] Started watching for new ooms in manager
Mar 13 12:38:52.819540 master-0 kubenswrapper[6980]: I0313 12:38:52.819504 6980 manager.go:319] Starting recovery of all containers
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823434 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d868028-9984-472a-8403-ffed767e1bf8" volumeName="kubernetes.io/secret/0d868028-9984-472a-8403-ffed767e1bf8-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823544 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54c7efc1-6d89-4831-89d6-6f2812c36c36" volumeName="kubernetes.io/empty-dir/54c7efc1-6d89-4831-89d6-6f2812c36c36-operand-assets" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823592 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54c7efc1-6d89-4831-89d6-6f2812c36c36" volumeName="kubernetes.io/secret/54c7efc1-6d89-4831-89d6-6f2812c36c36-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823604 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="603fef71-e0cd-4617-bd8a-a55580578c2f" volumeName="kubernetes.io/secret/603fef71-e0cd-4617-bd8a-a55580578c2f-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823615 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3eb38e0-d8b5-46fc-809d-73791d569816" volumeName="kubernetes.io/projected/e3eb38e0-d8b5-46fc-809d-73791d569816-kube-api-access" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823624 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edde8919-104a-4f05-8e21-46787f706bed" volumeName="kubernetes.io/projected/edde8919-104a-4f05-8e21-46787f706bed-kube-api-access-992bv" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823655 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" volumeName="kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-env-overrides" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823666 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6a45be0-19ef-4d36-b8a7-eb2705d24bfa" volumeName="kubernetes.io/projected/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa-kube-api-access-9n8sb" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823679 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf580693-2931-4fef-adb5-b396f7303352" volumeName="kubernetes.io/projected/cf580693-2931-4fef-adb5-b396f7303352-kube-api-access-qg7nx" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823690 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edde8919-104a-4f05-8e21-46787f706bed" volumeName="kubernetes.io/secret/edde8919-104a-4f05-8e21-46787f706bed-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823698 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2a74c2a-8376-4998-bdc6-02a978f1f568" volumeName="kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-config" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823709 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="684c9067-189a-4f50-ac8d-97111aa73d9c" volumeName="kubernetes.io/projected/684c9067-189a-4f50-ac8d-97111aa73d9c-kube-api-access" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823762 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73dc5747-2d30-4a2d-a784-1dea1e10811d" volumeName="kubernetes.io/secret/73dc5747-2d30-4a2d-a784-1dea1e10811d-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823773 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2a74c2a-8376-4998-bdc6-02a978f1f568" volumeName="kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-trusted-ca-bundle" seLinuxMountContext=""
Mar 13 12:38:52.823803 master-0 kubenswrapper[6980]: I0313 12:38:52.823782 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d1a0616-4479-4621-b042-36a586bd8248" volumeName="kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-sysctl-allowlist" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823831 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e4e773c-d970-4f5e-9172-c1ebdb41888d" volumeName="kubernetes.io/projected/6e4e773c-d970-4f5e-9172-c1ebdb41888d-kube-api-access-tdcsm" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823857 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71b741d4-3899-4d31-afd1-72f5a9321f75" volumeName="kubernetes.io/configmap/71b741d4-3899-4d31-afd1-72f5a9321f75-telemetry-config" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823869 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b5ab386-14ed-4610-a08a-54b6de877603" volumeName="kubernetes.io/projected/2b5ab386-14ed-4610-a08a-54b6de877603-kube-api-access-nqxjz" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823904 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f66dbf5-722f-4aed-becb-fb1b62ea7fe6" volumeName="kubernetes.io/projected/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-kube-api-access-r9sfh" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823916 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f66dbf5-722f-4aed-becb-fb1b62ea7fe6" volumeName="kubernetes.io/secret/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823925 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346" volumeName="kubernetes.io/projected/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-kube-api-access-pqm5h" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823935 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20217cff-2f81-4a56-9c15-28385c19258c" volumeName="kubernetes.io/projected/20217cff-2f81-4a56-9c15-28385c19258c-kube-api-access-nvprm" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823945 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d1a0616-4479-4621-b042-36a586bd8248" volumeName="kubernetes.io/projected/6d1a0616-4479-4621-b042-36a586bd8248-kube-api-access-jn59j" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823954 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffcc3a23-d81c-4064-a24a-857dbe3222c8" volumeName="kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cni-binary-copy" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823987 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1929440f-f2cc-450d-80ff-ded6788baa74" volumeName="kubernetes.io/secret/1929440f-f2cc-450d-80ff-ded6788baa74-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.823997 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad68c2d-762a-47ed-bd56-e823a83b9087" volumeName="kubernetes.io/projected/1ad68c2d-762a-47ed-bd56-e823a83b9087-kube-api-access-b2lvh" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824031 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f66dbf5-722f-4aed-becb-fb1b62ea7fe6" volumeName="kubernetes.io/configmap/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-config" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824065 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" volumeName="kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovnkube-config" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824078 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d1a0616-4479-4621-b042-36a586bd8248" volumeName="kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-whereabouts-configmap" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824087 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-config" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824097 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73dc5747-2d30-4a2d-a784-1dea1e10811d" volumeName="kubernetes.io/projected/73dc5747-2d30-4a2d-a784-1dea1e10811d-kube-api-access-9vsld" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824176 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1213b50-28bf-43ff-94c4-20616907735b" volumeName="kubernetes.io/configmap/c1213b50-28bf-43ff-94c4-20616907735b-trusted-ca" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824200 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1213b50-28bf-43ff-94c4-20616907735b" volumeName="kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-kube-api-access-c2dq8" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824210 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f85ab8ab-f9f1-47ad-9c96-9498cef92474" volumeName="kubernetes.io/projected/f85ab8ab-f9f1-47ad-9c96-9498cef92474-kube-api-access-sm25n" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824257 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffcc3a23-d81c-4064-a24a-857dbe3222c8" volumeName="kubernetes.io/projected/ffcc3a23-d81c-4064-a24a-857dbe3222c8-kube-api-access-b9nhl" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824277 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d868028-9984-472a-8403-ffed767e1bf8" volumeName="kubernetes.io/configmap/0d868028-9984-472a-8403-ffed767e1bf8-config" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824292 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16c2d774-967f-4964-ab4e-eb13c4364f63" volumeName="kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-bound-sa-token" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824335 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16c2d774-967f-4964-ab4e-eb13c4364f63" volumeName="kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-kube-api-access-bdvgq" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824352 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a5976df-0366-47b3-bc54-1ba7c249e87c" volumeName="kubernetes.io/projected/2a5976df-0366-47b3-bc54-1ba7c249e87c-kube-api-access-27pbr" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824365 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f942fce-07a9-4377-8330-c6249a5a8b24" volumeName="kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824376 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346" volumeName="kubernetes.io/configmap/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-trusted-ca" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824432 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffcc3a23-d81c-4064-a24a-857dbe3222c8" volumeName="kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-daemon-config" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824455 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16c2d774-967f-4964-ab4e-eb13c4364f63" volumeName="kubernetes.io/configmap/16c2d774-967f-4964-ab4e-eb13c4364f63-trusted-ca" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824464 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad68c2d-762a-47ed-bd56-e823a83b9087" volumeName="kubernetes.io/secret/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824498 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b5ab386-14ed-4610-a08a-54b6de877603" volumeName="kubernetes.io/configmap/2b5ab386-14ed-4610-a08a-54b6de877603-iptables-alerter-script" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824509 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" volumeName="kubernetes.io/projected/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-kube-api-access-x27d2" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824519 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf580693-2931-4fef-adb5-b396f7303352" volumeName="kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-env-overrides" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824536 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf580693-2931-4fef-adb5-b396f7303352" volumeName="kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-ovnkube-identity-cm" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824568 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad68c2d-762a-47ed-bd56-e823a83b9087" volumeName="kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-script-lib" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824597 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/projected/6e55908e-59f3-45a2-82aa-2616c5a2fd52-kube-api-access-x2jkn" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824611 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73dc5747-2d30-4a2d-a784-1dea1e10811d" volumeName="kubernetes.io/configmap/73dc5747-2d30-4a2d-a784-1dea1e10811d-config" seLinuxMountContext=""
Mar 13 12:38:52.824554 master-0 kubenswrapper[6980]: I0313 12:38:52.824623 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1213b50-28bf-43ff-94c4-20616907735b" volumeName="kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-bound-sa-token" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824664 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad68c2d-762a-47ed-bd56-e823a83b9087" volumeName="kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-env-overrides" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824677 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" volumeName="kubernetes.io/secret/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovn-control-plane-metrics-cert" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824688 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="684c9067-189a-4f50-ac8d-97111aa73d9c" volumeName="kubernetes.io/configmap/684c9067-189a-4f50-ac8d-97111aa73d9c-config" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824699 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824710 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2ad4825-17fa-4ddd-b21e-334158f1c048" volumeName="kubernetes.io/secret/b2ad4825-17fa-4ddd-b21e-334158f1c048-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824744 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf580693-2931-4fef-adb5-b396f7303352" volumeName="kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824755 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2a74c2a-8376-4998-bdc6-02a978f1f568" volumeName="kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-service-ca-bundle" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824768 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2a74c2a-8376-4998-bdc6-02a978f1f568" volumeName="kubernetes.io/projected/f2a74c2a-8376-4998-bdc6-02a978f1f568-kube-api-access-bkjph" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824778 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad68c2d-762a-47ed-bd56-e823a83b9087" volumeName="kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-config" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824790 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="684c9067-189a-4f50-ac8d-97111aa73d9c" volumeName="kubernetes.io/secret/684c9067-189a-4f50-ac8d-97111aa73d9c-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824822 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d1a0616-4479-4621-b042-36a586bd8248" volumeName="kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-binary-copy" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824833 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2ad4825-17fa-4ddd-b21e-334158f1c048" volumeName="kubernetes.io/projected/b2ad4825-17fa-4ddd-b21e-334158f1c048-kube-api-access-tnbf9" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824845 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edde8919-104a-4f05-8e21-46787f706bed" volumeName="kubernetes.io/empty-dir/edde8919-104a-4f05-8e21-46787f706bed-available-featuregates" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824855 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54c7efc1-6d89-4831-89d6-6f2812c36c36" volumeName="kubernetes.io/projected/54c7efc1-6d89-4831-89d6-6f2812c36c36-kube-api-access-qttkt" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824864 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c9773d-7e88-4e30-9b8a-792a869a860e" volumeName="kubernetes.io/projected/59c9773d-7e88-4e30-9b8a-792a869a860e-kube-api-access-vp6bn" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824895 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="603fef71-e0cd-4617-bd8a-a55580578c2f" volumeName="kubernetes.io/configmap/603fef71-e0cd-4617-bd8a-a55580578c2f-config" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824907 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e4e773c-d970-4f5e-9172-c1ebdb41888d" volumeName="kubernetes.io/configmap/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-trusted-ca" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824915 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2ad4825-17fa-4ddd-b21e-334158f1c048" volumeName="kubernetes.io/configmap/b2ad4825-17fa-4ddd-b21e-334158f1c048-config" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824926 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d868028-9984-472a-8403-ffed767e1bf8" volumeName="kubernetes.io/projected/0d868028-9984-472a-8403-ffed767e1bf8-kube-api-access" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824936 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1929440f-f2cc-450d-80ff-ded6788baa74" volumeName="kubernetes.io/configmap/1929440f-f2cc-450d-80ff-ded6788baa74-config" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824948 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b1777e4-6833-4b68-8cdf-ea8b36dbeae9" volumeName="kubernetes.io/secret/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-metrics-tls" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824980 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="603fef71-e0cd-4617-bd8a-a55580578c2f" volumeName="kubernetes.io/projected/603fef71-e0cd-4617-bd8a-a55580578c2f-kube-api-access-rspzx" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.824991 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-service-ca" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.825000 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8226ffac-1f76-4eaa-ada5-056b5fd031b4" volumeName="kubernetes.io/projected/8226ffac-1f76-4eaa-ada5-056b5fd031b4-kube-api-access-gkcxc" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.825011 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2a74c2a-8376-4998-bdc6-02a978f1f568" volumeName="kubernetes.io/secret/f2a74c2a-8376-4998-bdc6-02a978f1f568-serving-cert" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.825021 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1929440f-f2cc-450d-80ff-ded6788baa74" volumeName="kubernetes.io/projected/1929440f-f2cc-450d-80ff-ded6788baa74-kube-api-access" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.825048 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-client" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.825067 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b1777e4-6833-4b68-8cdf-ea8b36dbeae9" volumeName="kubernetes.io/projected/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-kube-api-access-5jknp" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.825083 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-ca" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.825095 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71b741d4-3899-4d31-afd1-72f5a9321f75" volumeName="kubernetes.io/projected/71b741d4-3899-4d31-afd1-72f5a9321f75-kube-api-access-2h5ht" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.825108 6980 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e3eb38e0-d8b5-46fc-809d-73791d569816" volumeName="kubernetes.io/configmap/e3eb38e0-d8b5-46fc-809d-73791d569816-service-ca" seLinuxMountContext=""
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.825144 6980 reconstruct.go:97] "Volume reconstruction finished"
Mar 13 12:38:52.827982 master-0 kubenswrapper[6980]: I0313 12:38:52.825152 6980 reconciler.go:26] "Reconciler: start to sync state"
Mar 13 12:38:52.829642 master-0 kubenswrapper[6980]: I0313 12:38:52.828079 6980 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 13 12:38:52.855702 master-0 kubenswrapper[6980]: I0313 12:38:52.855364 6980 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 13 12:38:52.858409 master-0 kubenswrapper[6980]: I0313 12:38:52.858373 6980 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 13 12:38:52.858521 master-0 kubenswrapper[6980]: I0313 12:38:52.858440 6980 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 13 12:38:52.858521 master-0 kubenswrapper[6980]: I0313 12:38:52.858470 6980 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 13 12:38:52.858644 master-0 kubenswrapper[6980]: E0313 12:38:52.858536 6980 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 12:38:52.860487 master-0 kubenswrapper[6980]: I0313 12:38:52.860459 6980 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 13 12:38:52.872531 master-0 kubenswrapper[6980]: I0313 12:38:52.872475 6980 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="0552724532a0871797536a0fa5461171eaa5b983641df0c9e3100001409bbe97" exitCode=0
Mar 13 12:38:52.872531 master-0 kubenswrapper[6980]: I0313 12:38:52.872516 6980 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="261cbab4cc990a283086b5578b976b53ce06514cd8246e1d92485867a0760ce8" exitCode=0
Mar 13 12:38:52.872531 master-0 kubenswrapper[6980]: I0313 12:38:52.872525 6980 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="1b6e1b00449d4ad0069d761f09fd31eb925ff8c4773bf223a962c96f72589083" exitCode=0
Mar 13 12:38:52.872531 master-0 kubenswrapper[6980]: I0313 12:38:52.872532 6980 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="68ab991a1ca1a43041140e5538bac0164a9cb6cf676c5102e75b42f612a72d9d" exitCode=0
Mar 13 12:38:52.872531 master-0 kubenswrapper[6980]: I0313 12:38:52.872539 6980 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="9caf396b8c5078621fb7d9a89a4bf5d4e00c4dccbb5c00252204a9ac1a3b5d3b" exitCode=0
Mar 13 12:38:52.872531 master-0 kubenswrapper[6980]: I0313 12:38:52.872546 6980 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="18a2b972b6d690603207972c9280fdef39401c1fb14724697481249e3cdd3fe3" exitCode=0
Mar 13 12:38:52.875143 master-0 kubenswrapper[6980]: I0313 12:38:52.875051 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 13 12:38:52.875607 master-0 kubenswrapper[6980]: I0313 12:38:52.875560 6980 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="fa04bd1d8b838a856ef3334cc68d9da0449dbf549bcd199af5292664d8bc9f66" exitCode=1
Mar 13 12:38:52.875607 master-0 kubenswrapper[6980]: I0313 12:38:52.875594 6980 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="68b6f8966a17045ff6a5d27e4da4e48714a155c30c56d6be16050ed7473f6700" exitCode=0
Mar 13 12:38:52.891083 master-0 kubenswrapper[6980]: I0313 12:38:52.891023 6980 generic.go:334] "Generic (PLEG): container finished" podID="0b19a429-6a4f-4f90-9901-417fe8921ccc" containerID="9f4ddd8b81aa8e6f6453e9d79c9c9826152b36b58f325733cabc91a77b93f83c" exitCode=0
Mar 13 12:38:52.893657 master-0 kubenswrapper[6980]: I0313 12:38:52.893567 6980 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="ce22fd707eb8d075fa41f40a0f4c10a702d0584171d207a5ade9ca190ac33eb6" exitCode=0
Mar 13 12:38:52.897121 master-0 kubenswrapper[6980]: I0313 12:38:52.897074 6980 generic.go:334] "Generic (PLEG): container finished" podID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerID="3d699a661192c0fe629e3652881a79b8980021e82a7bc93d27f3ce7bd63fd41d" exitCode=0
Mar 13 12:38:52.918012 master-0 kubenswrapper[6980]: I0313 12:38:52.917961 6980 generic.go:334] "Generic (PLEG): container finished" podID="1ad68c2d-762a-47ed-bd56-e823a83b9087" containerID="99513d1025df40d0dec85b8d387ea2b55e803e627368de7db4825a3613c52248" exitCode=0
Mar 13 12:38:52.927374 master-0 kubenswrapper[6980]: I0313 12:38:52.927295 6980 generic.go:334] "Generic (PLEG): container finished" podID="edde8919-104a-4f05-8e21-46787f706bed" containerID="9cb3e3949a1bb640329a4953a85d4530ae11d656b3ce5bea3323fa6af6e8d03b" exitCode=0
Mar 13 12:38:52.943066 master-0 kubenswrapper[6980]: I0313 12:38:52.943025 6980 manager.go:324] Recovery completed
Mar 13 12:38:52.958952 master-0 kubenswrapper[6980]: E0313 12:38:52.958878 6980 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 13 12:38:52.988646 master-0 kubenswrapper[6980]: I0313 12:38:52.988561 6980 cpu_manager.go:225] "Starting CPU manager" policy="none"
Mar 13 12:38:52.988646 master-0 kubenswrapper[6980]: I0313 12:38:52.988612 6980 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s"
Mar 13 12:38:52.988646 master-0 kubenswrapper[6980]: I0313 12:38:52.988653 6980 state_mem.go:36] "Initialized new in-memory state store"
Mar 13 12:38:52.988978 master-0 kubenswrapper[6980]: I0313 12:38:52.988900 6980 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 13 12:38:52.988978 master-0 kubenswrapper[6980]: I0313 12:38:52.988916 6980 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 13 12:38:52.988978 master-0 kubenswrapper[6980]: I0313 12:38:52.988954 6980 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint"
Mar 13 12:38:52.988978 master-0 kubenswrapper[6980]: I0313 12:38:52.988966 6980 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet=""
Mar 13 12:38:52.989155 master-0 kubenswrapper[6980]: I0313 12:38:52.988985 6980 policy_none.go:49] "None policy: Start"
Mar 13 12:38:52.991129 master-0 kubenswrapper[6980]: I0313
12:38:52.991089 6980 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 12:38:52.991215 master-0 kubenswrapper[6980]: I0313 12:38:52.991167 6980 state_mem.go:35] "Initializing new in-memory state store" Mar 13 12:38:52.991690 master-0 kubenswrapper[6980]: I0313 12:38:52.991663 6980 state_mem.go:75] "Updated machine memory state" Mar 13 12:38:52.991758 master-0 kubenswrapper[6980]: I0313 12:38:52.991691 6980 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 13 12:38:53.003205 master-0 kubenswrapper[6980]: I0313 12:38:53.003149 6980 manager.go:334] "Starting Device Plugin manager" Mar 13 12:38:53.003463 master-0 kubenswrapper[6980]: I0313 12:38:53.003419 6980 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 12:38:53.003463 master-0 kubenswrapper[6980]: I0313 12:38:53.003443 6980 server.go:79] "Starting device plugin registration server" Mar 13 12:38:53.004048 master-0 kubenswrapper[6980]: I0313 12:38:53.004018 6980 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 12:38:53.004111 master-0 kubenswrapper[6980]: I0313 12:38:53.004048 6980 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 12:38:53.004721 master-0 kubenswrapper[6980]: I0313 12:38:53.004689 6980 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 13 12:38:53.004854 master-0 kubenswrapper[6980]: I0313 12:38:53.004811 6980 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 12:38:53.004854 master-0 kubenswrapper[6980]: I0313 12:38:53.004849 6980 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 12:38:53.105043 master-0 kubenswrapper[6980]: I0313 12:38:53.104630 6980 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:38:53.107556 master-0 
kubenswrapper[6980]: I0313 12:38:53.107510 6980 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:38:53.107556 master-0 kubenswrapper[6980]: I0313 12:38:53.107539 6980 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:38:53.107556 master-0 kubenswrapper[6980]: I0313 12:38:53.107547 6980 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:38:53.107746 master-0 kubenswrapper[6980]: I0313 12:38:53.107606 6980 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:38:53.117232 master-0 kubenswrapper[6980]: I0313 12:38:53.117175 6980 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 13 12:38:53.117433 master-0 kubenswrapper[6980]: I0313 12:38:53.117294 6980 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 13 12:38:53.159643 master-0 kubenswrapper[6980]: I0313 12:38:53.159220 6980 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 12:38:53.160186 master-0 kubenswrapper[6980]: I0313 12:38:53.159903 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"ed78e1786123e1fdf666e037202049096483e9131a9b2ba5d12c1d669373c1fa"} Mar 13 12:38:53.160922 master-0 kubenswrapper[6980]: I0313 12:38:53.160759 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" 
event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"fa04bd1d8b838a856ef3334cc68d9da0449dbf549bcd199af5292664d8bc9f66"} Mar 13 12:38:53.160922 master-0 kubenswrapper[6980]: I0313 12:38:53.160827 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"68b6f8966a17045ff6a5d27e4da4e48714a155c30c56d6be16050ed7473f6700"} Mar 13 12:38:53.160922 master-0 kubenswrapper[6980]: I0313 12:38:53.160845 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226"} Mar 13 12:38:53.160922 master-0 kubenswrapper[6980]: I0313 12:38:53.160902 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"5b237a8f0fb7f64dfadac55f3b8fce83d665c3145bdb4f7b5e426e2db8133d9a"} Mar 13 12:38:53.160922 master-0 kubenswrapper[6980]: I0313 12:38:53.160916 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04"} Mar 13 12:38:53.160922 master-0 kubenswrapper[6980]: I0313 12:38:53.160931 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"5ae7ae35f7136762cbb13e8c36aee38aecdcf9e047584314d44cc6cd1301533e"} Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.160942 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"9db6288a98029b0a09c12d8d262b41839cd5c5aa57fa3824b78834e64ca0ee2e"} Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.160974 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d"} Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.160994 6980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ba036bffe5621c444daa0bd1c229eac4d583082b0f6956a7bb655f8664a38947" Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.161004 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"63e03be6775769ad765af20dfd2ac68f1e500a160a4e77eda15bd7fdcfe1bc2a"} Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.161014 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"9cc438a36a13c0e2e1f239bcab312b0eda7119d2153cef22f48639612d94c13e"} Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.161022 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"ce22fd707eb8d075fa41f40a0f4c10a702d0584171d207a5ade9ca190ac33eb6"} Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.161032 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15"} Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.161045 6980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c01927a76a297da5840d73eff9921d3c26cf5f0e7c0b06e61b8b4a6964b05b8" Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.161088 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf"} Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.161108 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d"} Mar 13 12:38:53.162073 master-0 kubenswrapper[6980]: I0313 12:38:53.161120 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"d5dc6c6e80f445d51122ad5b527a93180bba8d53bfd02a0ec172defc7ab4ca77"} Mar 13 12:38:53.176484 master-0 kubenswrapper[6980]: E0313 12:38:53.176421 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:38:53.176724 master-0 kubenswrapper[6980]: E0313 12:38:53.176436 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.177876 master-0 kubenswrapper[6980]: E0313 12:38:53.177825 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods 
\"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:53.177990 master-0 kubenswrapper[6980]: E0313 12:38:53.177900 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:38:53.179662 master-0 kubenswrapper[6980]: W0313 12:38:53.179627 6980 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 13 12:38:53.179779 master-0 kubenswrapper[6980]: E0313 12:38:53.179690 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:38:53.229881 master-0 kubenswrapper[6980]: I0313 12:38:53.229778 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:38:53.229881 master-0 kubenswrapper[6980]: I0313 12:38:53.229839 6980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:53.229881 master-0 kubenswrapper[6980]: I0313 12:38:53.229883 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.229908 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.229925 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.229943 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.229964 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.229988 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.230009 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.230041 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.230071 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: 
\"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.230097 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.230137 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.230183 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 12:38:53.230218 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:38:53.230226 master-0 kubenswrapper[6980]: I0313 
12:38:53.230236 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:38:53.230698 master-0 kubenswrapper[6980]: I0313 12:38:53.230256 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.330704 master-0 kubenswrapper[6980]: I0313 12:38:53.330611 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:53.330951 master-0 kubenswrapper[6980]: I0313 12:38:53.330755 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:53.330951 master-0 kubenswrapper[6980]: I0313 12:38:53.330847 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 
12:38:53.330951 master-0 kubenswrapper[6980]: I0313 12:38:53.330877 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:38:53.330951 master-0 kubenswrapper[6980]: I0313 12:38:53.330899 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.330951 master-0 kubenswrapper[6980]: I0313 12:38:53.330919 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.330951 master-0 kubenswrapper[6980]: I0313 12:38:53.330939 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:38:53.331226 master-0 kubenswrapper[6980]: I0313 12:38:53.330959 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.331226 master-0 kubenswrapper[6980]: I0313 12:38:53.331024 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:53.331226 master-0 kubenswrapper[6980]: I0313 12:38:53.331050 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.331226 master-0 kubenswrapper[6980]: I0313 12:38:53.331073 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.331226 master-0 kubenswrapper[6980]: I0313 12:38:53.331107 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:53.331226 master-0 kubenswrapper[6980]: I0313 12:38:53.331137 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod 
\"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:38:53.331226 master-0 kubenswrapper[6980]: I0313 12:38:53.331161 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:38:53.331226 master-0 kubenswrapper[6980]: I0313 12:38:53.331181 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:38:53.331226 master-0 kubenswrapper[6980]: I0313 12:38:53.331202 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:38:53.331226 master-0 kubenswrapper[6980]: I0313 12:38:53.331227 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331247 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" 
(UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331287 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331326 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331359 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331391 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331500 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331548 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331713 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331755 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331790 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:38:53.331838 master-0 kubenswrapper[6980]: I0313 12:38:53.331821 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:38:53.332496 master-0 kubenswrapper[6980]: I0313 12:38:53.331872 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:38:53.332496 master-0 kubenswrapper[6980]: I0313 12:38:53.331904 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:38:53.332496 master-0 kubenswrapper[6980]: I0313 12:38:53.331955 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:38:53.332496 master-0 kubenswrapper[6980]: I0313 12:38:53.331987 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:38:53.332496 master-0 kubenswrapper[6980]: I0313 12:38:53.332017 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:38:53.332496 master-0 kubenswrapper[6980]: I0313 12:38:53.332049 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:38:53.789078 master-0 kubenswrapper[6980]: I0313 12:38:53.788958 6980 apiserver.go:52] "Watching apiserver"
Mar 13 12:38:53.798257 master-0 kubenswrapper[6980]: I0313 12:38:53.798198 6980 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 13 12:38:53.799435 master-0 kubenswrapper[6980]: I0313 12:38:53.799379 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-multus/multus-admission-controller-8d675b596-pbgd4","openshift-multus/network-metrics-daemon-ztpxf","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf","kube-system/bootstrap-kube-scheduler-master-0","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg","openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h","openshift-ovn-kubernetes/ovnkube-node-vlrf6","openshift-dns-operator/dns-operator-589895fbb7-w7mv2","openshift-ingress-operator/ingress-operator-677db989d6-9nxcz","openshift-network-diagnostics/network-check-target-jjmb8","openshift-network-operator/iptables-alerter-456r5","openshift-multus/multus-additional-cni-plugins-wl6w4","assisted-installer/assisted-installer-controller-7vm6x","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h","openshift-network-node-identity/network-node-identity-kb5r7","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj","openshift-multus/multus-6c7r9","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf","openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg","openshift-config-operator/openshift-config-operator-64488f9d78-tml9z","openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn","openshift-marketplace/marketplace-operator-64bf9778cb-7wnld","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk","openshift-network-operator/network-operator-7c649bf6d4-fcthv","kube-system/bootstrap-kube-controller-manager-master-0","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"]
Mar 13 12:38:53.799879 master-0 kubenswrapper[6980]: I0313 12:38:53.799844 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-7vm6x"
Mar 13 12:38:53.799979 master-0 kubenswrapper[6980]: I0313 12:38:53.799948 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:38:53.801901 master-0 kubenswrapper[6980]: I0313 12:38:53.801499 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:38:53.801901 master-0 kubenswrapper[6980]: I0313 12:38:53.801529 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:38:53.806424 master-0 kubenswrapper[6980]: I0313 12:38:53.802101 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:38:53.806424 master-0 kubenswrapper[6980]: I0313 12:38:53.802829 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:38:53.806424 master-0 kubenswrapper[6980]: I0313 12:38:53.804688 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:38:53.810643 master-0 kubenswrapper[6980]: I0313 12:38:53.809068 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2"
Mar 13 12:38:53.812048 master-0 kubenswrapper[6980]: I0313 12:38:53.812015 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:53.812148 master-0 kubenswrapper[6980]: I0313 12:38:53.812108 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:53.814421 master-0 kubenswrapper[6980]: I0313 12:38:53.814088 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"
Mar 13 12:38:53.814421 master-0 kubenswrapper[6980]: I0313 12:38:53.814217 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:53.814991 master-0 kubenswrapper[6980]: I0313 12:38:53.814944 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:38:53.816792 master-0 kubenswrapper[6980]: I0313 12:38:53.816199 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:38:53.822393 master-0 kubenswrapper[6980]: I0313 12:38:53.821980 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 13 12:38:53.822393 master-0 kubenswrapper[6980]: I0313 12:38:53.822019 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 13 12:38:53.822393 master-0 kubenswrapper[6980]: I0313 12:38:53.821985 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.822393 master-0 kubenswrapper[6980]: I0313 12:38:53.822068 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.822393 master-0 kubenswrapper[6980]: I0313 12:38:53.822116 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 12:38:53.822393 master-0 kubenswrapper[6980]: I0313 12:38:53.821996 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 13 12:38:53.822393 master-0 kubenswrapper[6980]: I0313 12:38:53.822179 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 12:38:53.822393 master-0 kubenswrapper[6980]: I0313 12:38:53.822250 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 13 12:38:53.822393 master-0 kubenswrapper[6980]: I0313 12:38:53.822278 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 12:38:53.822393 master-0 kubenswrapper[6980]: I0313 12:38:53.822281 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 12:38:53.823555 master-0 kubenswrapper[6980]: I0313 12:38:53.823524 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 13 12:38:53.824606 master-0 kubenswrapper[6980]: I0313 12:38:53.824570 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 13 12:38:53.825065 master-0 kubenswrapper[6980]: I0313 12:38:53.824686 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.825065 master-0 kubenswrapper[6980]: I0313 12:38:53.824721 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.825065 master-0 kubenswrapper[6980]: I0313 12:38:53.824772 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.825065 master-0 kubenswrapper[6980]: I0313 12:38:53.824784 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.825065 master-0 kubenswrapper[6980]: I0313 12:38:53.824826 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 13 12:38:53.825065 master-0 kubenswrapper[6980]: I0313 12:38:53.824853 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 13 12:38:53.825065 master-0 kubenswrapper[6980]: I0313 12:38:53.824883 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 13 12:38:53.825065 master-0 kubenswrapper[6980]: I0313 12:38:53.824835 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.825065 master-0 kubenswrapper[6980]: I0313 12:38:53.824885 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 12:38:53.825479 master-0 kubenswrapper[6980]: I0313 12:38:53.825380 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.825479 master-0 kubenswrapper[6980]: I0313 12:38:53.825473 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 13 12:38:53.825591 master-0 kubenswrapper[6980]: I0313 12:38:53.825551 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 13 12:38:53.825688 master-0 kubenswrapper[6980]: I0313 12:38:53.825667 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 13 12:38:53.825797 master-0 kubenswrapper[6980]: I0313 12:38:53.825773 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 13 12:38:53.826160 master-0 kubenswrapper[6980]: I0313 12:38:53.825806 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.826160 master-0 kubenswrapper[6980]: I0313 12:38:53.825865 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.826160 master-0 kubenswrapper[6980]: I0313 12:38:53.825869 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 13 12:38:53.826160 master-0 kubenswrapper[6980]: I0313 12:38:53.826034 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 12:38:53.826160 master-0 kubenswrapper[6980]: I0313 12:38:53.826075 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 13 12:38:53.826461 master-0 kubenswrapper[6980]: I0313 12:38:53.826424 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.826619 master-0 kubenswrapper[6980]: I0313 12:38:53.826598 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.828143 master-0 kubenswrapper[6980]: I0313 12:38:53.827223 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.828143 master-0 kubenswrapper[6980]: I0313 12:38:53.827329 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 12:38:53.828143 master-0 kubenswrapper[6980]: I0313 12:38:53.827627 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 12:38:53.828143 master-0 kubenswrapper[6980]: I0313 12:38:53.827689 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.828143 master-0 kubenswrapper[6980]: I0313 12:38:53.827710 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.828143 master-0 kubenswrapper[6980]: I0313 12:38:53.828005 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 12:38:53.828143 master-0 kubenswrapper[6980]: I0313 12:38:53.828024 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 13 12:38:53.828450 master-0 kubenswrapper[6980]: I0313 12:38:53.828219 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.828450 master-0 kubenswrapper[6980]: I0313 12:38:53.828245 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.828695 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.829025 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.829161 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.829205 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.829553 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.829681 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.829788 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830138 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830272 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830399 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830429 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830438 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830463 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830488 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830542 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830592 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830602 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830357 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830288 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830372 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830727 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830635 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830823 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830823 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.830825 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.831051 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.831066 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.831106 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.831129 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 12:38:53.831997 master-0 kubenswrapper[6980]: I0313 12:38:53.831146 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 13 12:38:53.833893 master-0 kubenswrapper[6980]: I0313 12:38:53.833640 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-k8s-cni-cncf-io\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.833893 master-0 kubenswrapper[6980]: I0313 12:38:53.831170 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 12:38:53.833893 master-0 kubenswrapper[6980]: I0313 12:38:53.833726 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.833893 master-0 kubenswrapper[6980]: I0313 12:38:53.833809 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-script-lib\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.833893 master-0 kubenswrapper[6980]: I0313 12:38:53.833847 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-host-etc-kube\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:38:53.833893 master-0 kubenswrapper[6980]: I0313 12:38:53.833875 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c2d774-967f-4964-ab4e-eb13c4364f63-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:38:53.834165 master-0 kubenswrapper[6980]: I0313 12:38:53.833995 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-bound-sa-token\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:38:53.834210 master-0 kubenswrapper[6980]: I0313 12:38:53.834159 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684c9067-189a-4f50-ac8d-97111aa73d9c-serving-cert\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:38:53.834261 master-0 kubenswrapper[6980]: I0313 12:38:53.834247 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-conf-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.834879 master-0 kubenswrapper[6980]: I0313 12:38:53.831233 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 13 12:38:53.834879 master-0 kubenswrapper[6980]: I0313 12:38:53.831274 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 13 12:38:53.834879 master-0 kubenswrapper[6980]: I0313 12:38:53.831303 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 13 12:38:53.834879 master-0 kubenswrapper[6980]: I0313 12:38:53.831389 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 12:38:53.835071 master-0 kubenswrapper[6980]: I0313 12:38:53.831391 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.835071 master-0 kubenswrapper[6980]: I0313 12:38:53.831397 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 13 12:38:53.835157 master-0 kubenswrapper[6980]: I0313 12:38:53.831452 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 13 12:38:53.835208 master-0 kubenswrapper[6980]: I0313 12:38:53.831501 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.831569 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.832919 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.833923 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.834054 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.834178 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.835703 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.834255 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.834268 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.836276 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.836538 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684c9067-189a-4f50-ac8d-97111aa73d9c-serving-cert\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.836696 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.837075 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 13 12:38:53.837309 master-0 kubenswrapper[6980]: I0313 12:38:53.837315 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 13 12:38:53.839084 master-0 kubenswrapper[6980]: I0313 12:38:53.838721 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 13 12:38:53.839944 master-0 kubenswrapper[6980]: I0313 12:38:53.839781 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:38:53.839944 master-0 kubenswrapper[6980]: I0313 12:38:53.839856 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1929440f-f2cc-450d-80ff-ded6788baa74-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"
Mar 13 12:38:53.839944 master-0 kubenswrapper[6980]: I0313 12:38:53.839923 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-netns\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.840135 master-0 kubenswrapper[6980]: I0313 12:38:53.839993 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9nhl\" (UniqueName: \"kubernetes.io/projected/ffcc3a23-d81c-4064-a24a-857dbe3222c8-kube-api-access-b9nhl\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.840135 master-0 kubenswrapper[6980]: I0313 12:38:53.839965 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 13 12:38:53.840429 master-0 kubenswrapper[6980]: I0313 12:38:53.840041 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:53.840429 master-0 kubenswrapper[6980]: I0313 12:38:53.840292 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n8sb\" (UniqueName: \"kubernetes.io/projected/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa-kube-api-access-9n8sb\") pod \"csi-snapshot-controller-operator-5685fbc7d-77b2h\" (UID: \"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h"
Mar 13 12:38:53.840429 master-0 kubenswrapper[6980]: I0313 12:38:53.840346 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27pbr\" (UniqueName: \"kubernetes.io/projected/2a5976df-0366-47b3-bc54-1ba7c249e87c-kube-api-access-27pbr\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"
Mar 13 12:38:53.840429 master-0 kubenswrapper[6980]: I0313 12:38:53.840383 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:38:53.840429 master-0 kubenswrapper[6980]: I0313 12:38:53.840410 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qttkt\" (UniqueName: \"kubernetes.io/projected/54c7efc1-6d89-4831-89d6-6f2812c36c36-kube-api-access-qttkt\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b"
Mar 13 12:38:53.840631 master-0 kubenswrapper[6980]: I0313 12:38:53.840530 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1929440f-f2cc-450d-80ff-ded6788baa74-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"
Mar 13 12:38:53.840672 master-0 kubenswrapper[6980]: I0313 12:38:53.840624 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1929440f-f2cc-450d-80ff-ded6788baa74-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"
Mar 13 12:38:53.840672 master-0 kubenswrapper[6980]: I0313 12:38:53.840658 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:38:53.840728 master-0 kubenswrapper[6980]: I0313 12:38:53.840703 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2jkn\" (UniqueName: \"kubernetes.io/projected/6e55908e-59f3-45a2-82aa-2616c5a2fd52-kube-api-access-x2jkn\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:53.841098 master-0 kubenswrapper[6980]: I0313 12:38:53.840783 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:38:53.841098 master-0 kubenswrapper[6980]: I0313 12:38:53.840835 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:53.841098 master-0 kubenswrapper[6980]: I0313 12:38:53.840893 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1213b50-28bf-43ff-94c4-20616907735b-trusted-ca\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:53.841098 master-0 kubenswrapper[6980]: I0313 12:38:53.840979 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-multus\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.841098 master-0 kubenswrapper[6980]: I0313 12:38:53.841020 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" 
(UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-hostroot\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.841098 master-0 kubenswrapper[6980]: I0313 12:38:53.841062 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.841098 master-0 kubenswrapper[6980]: I0313 12:38:53.841104 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ad4825-17fa-4ddd-b21e-334158f1c048-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:53.841434 master-0 kubenswrapper[6980]: I0313 12:38:53.841148 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3eb38e0-d8b5-46fc-809d-73791d569816-service-ca\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:38:53.841434 master-0 kubenswrapper[6980]: I0313 12:38:53.841192 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-env-overrides\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:38:53.841434 master-0 kubenswrapper[6980]: 
I0313 12:38:53.841276 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:53.841434 master-0 kubenswrapper[6980]: I0313 12:38:53.841334 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:38:53.841434 master-0 kubenswrapper[6980]: I0313 12:38:53.841376 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-systemd-units\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.841434 master-0 kubenswrapper[6980]: I0313 12:38:53.841407 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.841749 master-0 kubenswrapper[6980]: I0313 12:38:53.841497 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73dc5747-2d30-4a2d-a784-1dea1e10811d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:38:53.841749 master-0 kubenswrapper[6980]: I0313 12:38:53.841612 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:38:53.841749 master-0 kubenswrapper[6980]: I0313 12:38:53.841675 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqm5h\" (UniqueName: \"kubernetes.io/projected/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-kube-api-access-pqm5h\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:53.841749 master-0 kubenswrapper[6980]: I0313 12:38:53.841709 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cni-binary-copy\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.841930 master-0 kubenswrapper[6980]: I0313 12:38:53.841731 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:53.841930 master-0 kubenswrapper[6980]: I0313 12:38:53.841745 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-env-overrides\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:38:53.841930 master-0 kubenswrapper[6980]: I0313 12:38:53.841819 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-bin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.842086 master-0 kubenswrapper[6980]: I0313 12:38:53.841939 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-env-overrides\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.842086 master-0 kubenswrapper[6980]: I0313 12:38:53.841953 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73dc5747-2d30-4a2d-a784-1dea1e10811d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:38:53.842086 master-0 kubenswrapper[6980]: I0313 12:38:53.841959 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2dq8\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-kube-api-access-c2dq8\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:53.842086 master-0 kubenswrapper[6980]: I0313 
12:38:53.841971 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cni-binary-copy\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.842327 master-0 kubenswrapper[6980]: I0313 12:38:53.842157 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 12:38:53.842327 master-0 kubenswrapper[6980]: I0313 12:38:53.842190 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-env-overrides\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.842327 master-0 kubenswrapper[6980]: I0313 12:38:53.842192 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-kubelet\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.842327 master-0 kubenswrapper[6980]: I0313 12:38:53.842238 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3eb38e0-d8b5-46fc-809d-73791d569816-service-ca\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:38:53.842327 master-0 kubenswrapper[6980]: I0313 12:38:53.842316 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x27d2\" (UniqueName: 
\"kubernetes.io/projected/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-kube-api-access-x27d2\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:38:53.842561 master-0 kubenswrapper[6980]: I0313 12:38:53.842357 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm25n\" (UniqueName: \"kubernetes.io/projected/f85ab8ab-f9f1-47ad-9c96-9498cef92474-kube-api-access-sm25n\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:38:53.842561 master-0 kubenswrapper[6980]: I0313 12:38:53.842388 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg7nx\" (UniqueName: \"kubernetes.io/projected/cf580693-2931-4fef-adb5-b396f7303352-kube-api-access-qg7nx\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:38:53.842561 master-0 kubenswrapper[6980]: I0313 12:38:53.842413 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684c9067-189a-4f50-ac8d-97111aa73d9c-config\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:38:53.842561 master-0 kubenswrapper[6980]: I0313 12:38:53.842465 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:53.842561 
master-0 kubenswrapper[6980]: I0313 12:38:53.842511 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-ovn\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.842561 master-0 kubenswrapper[6980]: I0313 12:38:53.842523 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ad4825-17fa-4ddd-b21e-334158f1c048-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:53.842851 master-0 kubenswrapper[6980]: I0313 12:38:53.842785 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:53.842851 master-0 kubenswrapper[6980]: I0313 12:38:53.842823 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684c9067-189a-4f50-ac8d-97111aa73d9c-config\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:38:53.843145 master-0 kubenswrapper[6980]: I0313 12:38:53.843000 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-var-lib-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.843145 master-0 kubenswrapper[6980]: I0313 12:38:53.843118 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:38:53.843263 master-0 kubenswrapper[6980]: I0313 12:38:53.843191 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-log-socket\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.843320 master-0 kubenswrapper[6980]: I0313 12:38:53.843281 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-config\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.843362 master-0 kubenswrapper[6980]: I0313 12:38:53.843310 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/54c7efc1-6d89-4831-89d6-6f2812c36c36-operand-assets\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:53.843362 master-0 kubenswrapper[6980]: I0313 12:38:53.843355 6980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-kubelet\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843420 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-netd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843529 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d868028-9984-472a-8403-ffed767e1bf8-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843599 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-config\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843568 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/54c7efc1-6d89-4831-89d6-6f2812c36c36-operand-assets\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843616 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/71b741d4-3899-4d31-afd1-72f5a9321f75-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843792 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d868028-9984-472a-8403-ffed767e1bf8-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843797 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603fef71-e0cd-4617-bd8a-a55580578c2f-config\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843868 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovn-node-metrics-cert\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843893 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2lvh\" 
(UniqueName: \"kubernetes.io/projected/1ad68c2d-762a-47ed-bd56-e823a83b9087-kube-api-access-b2lvh\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843921 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-node-log\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.843980 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9sfh\" (UniqueName: \"kubernetes.io/projected/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-kube-api-access-r9sfh\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.844103 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.844159 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-etc-kubernetes\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " 
pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.844188 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-whereabouts-configmap\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.844276 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jknp\" (UniqueName: \"kubernetes.io/projected/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-kube-api-access-5jknp\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.844277 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/71b741d4-3899-4d31-afd1-72f5a9321f75-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.844481 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603fef71-e0cd-4617-bd8a-a55580578c2f-serving-cert\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.844522 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-bin\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.844591 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovn-node-metrics-cert\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.844633 master-0 kubenswrapper[6980]: I0313 12:38:53.844592 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-metrics-tls\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.844717 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603fef71-e0cd-4617-bd8a-a55580578c2f-config\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.844858 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603fef71-e0cd-4617-bd8a-a55580578c2f-serving-cert\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.844885 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-whereabouts-configmap\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.844935 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.844990 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.845029 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.845082 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-metrics-tls\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: 
\"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.845273 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c7efc1-6d89-4831-89d6-6f2812c36c36-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.845114 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c7efc1-6d89-4831-89d6-6f2812c36c36-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.845276 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.845309 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-slash\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.846321 master-0 kubenswrapper[6980]: I0313 12:38:53.845335 6980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.847998 master-0 kubenswrapper[6980]: I0313 12:38:53.847030 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 13 12:38:53.851709 master-0 kubenswrapper[6980]: I0313 12:38:53.851656 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-config\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:53.851797 master-0 kubenswrapper[6980]: I0313 12:38:53.851717 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:38:53.851797 master-0 kubenswrapper[6980]: I0313 12:38:53.851738 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-netns\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.851991 master-0 kubenswrapper[6980]: I0313 12:38:53.851965 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:53.852217 master-0 kubenswrapper[6980]: I0313 12:38:53.852174 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:53.852284 master-0 kubenswrapper[6980]: I0313 12:38:53.852226 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/edde8919-104a-4f05-8e21-46787f706bed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:53.852537 master-0 kubenswrapper[6980]: I0313 12:38:53.852306 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/edde8919-104a-4f05-8e21-46787f706bed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:53.852618 master-0 kubenswrapper[6980]: I0313 12:38:53.852553 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-config\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:53.852662 master-0 
kubenswrapper[6980]: I0313 12:38:53.852625 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn59j\" (UniqueName: \"kubernetes.io/projected/6d1a0616-4479-4621-b042-36a586bd8248-kube-api-access-jn59j\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.852701 master-0 kubenswrapper[6980]: I0313 12:38:53.852673 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-ovnkube-identity-cm\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:38:53.852765 master-0 kubenswrapper[6980]: I0313 12:38:53.852701 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1929440f-f2cc-450d-80ff-ded6788baa74-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:53.852765 master-0 kubenswrapper[6980]: I0313 12:38:53.852709 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 12:38:53.852765 master-0 kubenswrapper[6980]: I0313 12:38:53.852735 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnbf9\" (UniqueName: \"kubernetes.io/projected/b2ad4825-17fa-4ddd-b21e-334158f1c048-kube-api-access-tnbf9\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 
12:38:53.852871 master-0 kubenswrapper[6980]: I0313 12:38:53.852766 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:38:53.853046 master-0 kubenswrapper[6980]: I0313 12:38:53.853021 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1929440f-f2cc-450d-80ff-ded6788baa74-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:53.853335 master-0 kubenswrapper[6980]: I0313 12:38:53.853314 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:38:53.854919 master-0 kubenswrapper[6980]: I0313 12:38:53.854893 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:53.854919 master-0 kubenswrapper[6980]: I0313 12:38:53.854908 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-ovnkube-identity-cm\") pod 
\"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:38:53.855043 master-0 kubenswrapper[6980]: I0313 12:38:53.854933 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-os-release\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.855043 master-0 kubenswrapper[6980]: I0313 12:38:53.854967 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3eb38e0-d8b5-46fc-809d-73791d569816-kube-api-access\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:38:53.855043 master-0 kubenswrapper[6980]: I0313 12:38:53.854989 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edde8919-104a-4f05-8e21-46787f706bed-serving-cert\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:53.855043 master-0 kubenswrapper[6980]: I0313 12:38:53.855023 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:53.855236 master-0 kubenswrapper[6980]: I0313 12:38:53.855080 6980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-etc-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.855236 master-0 kubenswrapper[6980]: I0313 12:38:53.855101 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:53.855236 master-0 kubenswrapper[6980]: I0313 12:38:53.855101 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-serving-cert\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:53.855236 master-0 kubenswrapper[6980]: I0313 12:38:53.855187 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-config\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:53.855406 master-0 kubenswrapper[6980]: I0313 12:38:53.855253 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:53.855406 master-0 kubenswrapper[6980]: I0313 12:38:53.855289 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:53.855406 master-0 kubenswrapper[6980]: I0313 12:38:53.855316 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.855406 master-0 kubenswrapper[6980]: I0313 12:38:53.855322 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-serving-cert\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:53.855406 master-0 kubenswrapper[6980]: I0313 12:38:53.855344 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:53.855406 master-0 kubenswrapper[6980]: I0313 12:38:53.855374 6980 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/684c9067-189a-4f50-ac8d-97111aa73d9c-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:38:53.855406 master-0 kubenswrapper[6980]: I0313 12:38:53.855389 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edde8919-104a-4f05-8e21-46787f706bed-serving-cert\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:53.855406 master-0 kubenswrapper[6980]: I0313 12:38:53.855403 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-992bv\" (UniqueName: \"kubernetes.io/projected/edde8919-104a-4f05-8e21-46787f706bed-kube-api-access-992bv\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:38:53.855758 master-0 kubenswrapper[6980]: I0313 12:38:53.855441 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2ad4825-17fa-4ddd-b21e-334158f1c048-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:53.855758 master-0 kubenswrapper[6980]: I0313 12:38:53.855497 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d868028-9984-472a-8403-ffed767e1bf8-kube-api-access\") pod 
\"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:53.855758 master-0 kubenswrapper[6980]: I0313 12:38:53.855527 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5ht\" (UniqueName: \"kubernetes.io/projected/71b741d4-3899-4d31-afd1-72f5a9321f75-kube-api-access-2h5ht\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:53.855758 master-0 kubenswrapper[6980]: I0313 12:38:53.855623 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvprm\" (UniqueName: \"kubernetes.io/projected/20217cff-2f81-4a56-9c15-28385c19258c-kube-api-access-nvprm\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:38:53.855758 master-0 kubenswrapper[6980]: I0313 12:38:53.855652 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-config\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:53.855758 master-0 kubenswrapper[6980]: I0313 12:38:53.855657 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-os-release\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.855758 master-0 
kubenswrapper[6980]: I0313 12:38:53.855695 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2cb\" (UniqueName: \"kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:38:53.855758 master-0 kubenswrapper[6980]: I0313 12:38:53.855720 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:38:53.855758 master-0 kubenswrapper[6980]: I0313 12:38:53.855757 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkjph\" (UniqueName: \"kubernetes.io/projected/f2a74c2a-8376-4998-bdc6-02a978f1f568-kube-api-access-bkjph\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:38:53.856256 master-0 kubenswrapper[6980]: I0313 12:38:53.855768 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2ad4825-17fa-4ddd-b21e-334158f1c048-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:38:53.856256 master-0 kubenswrapper[6980]: I0313 12:38:53.856112 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.856256 master-0 kubenswrapper[6980]: I0313 12:38:53.856154 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:53.856256 master-0 kubenswrapper[6980]: I0313 12:38:53.856201 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdvgq\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-kube-api-access-bdvgq\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:38:53.856256 master-0 kubenswrapper[6980]: I0313 12:38:53.856190 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:38:53.856256 master-0 kubenswrapper[6980]: I0313 12:38:53.856224 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: 
\"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856266 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-cnibin\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856333 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdcsm\" (UniqueName: \"kubernetes.io/projected/6e4e773c-d970-4f5e-9172-c1ebdb41888d-kube-api-access-tdcsm\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856374 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856407 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-socket-dir-parent\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856481 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-gkcxc\" (UniqueName: \"kubernetes.io/projected/8226ffac-1f76-4eaa-ada5-056b5fd031b4-kube-api-access-gkcxc\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856555 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rspzx\" (UniqueName: \"kubernetes.io/projected/603fef71-e0cd-4617-bd8a-a55580578c2f-kube-api-access-rspzx\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856625 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-system-cni-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856657 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-binary-copy\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856786 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d868028-9984-472a-8403-ffed767e1bf8-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856824 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-multus-certs\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856867 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp6bn\" (UniqueName: \"kubernetes.io/projected/59c9773d-7e88-4e30-9b8a-792a869a860e-kube-api-access-vp6bn\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856900 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-client\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856941 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-system-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857018 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-binary-copy\") 
pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.856993 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-systemd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857161 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857197 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-config\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857359 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d868028-9984-472a-8403-ffed767e1bf8-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857456 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vsld\" (UniqueName: \"kubernetes.io/projected/73dc5747-2d30-4a2d-a784-1dea1e10811d-kube-api-access-9vsld\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857499 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857541 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857551 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-config\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857590 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cnibin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857701 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-daemon-config\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857742 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a74c2a-8376-4998-bdc6-02a978f1f568-serving-cert\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857769 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dc5747-2d30-4a2d-a784-1dea1e10811d-config\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857832 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857934 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.858030 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-daemon-config\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.858185 master-0 kubenswrapper[6980]: I0313 12:38:53.857975 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dc5747-2d30-4a2d-a784-1dea1e10811d-config\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"
Mar 13 12:38:53.860472 master-0 kubenswrapper[6980]: I0313 12:38:53.858392 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-client\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:38:53.860472 master-0 kubenswrapper[6980]: I0313 12:38:53.858454 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a74c2a-8376-4998-bdc6-02a978f1f568-serving-cert\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:38:53.860472 master-0 kubenswrapper[6980]: I0313 12:38:53.859024 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 12:38:53.860472 master-0 kubenswrapper[6980]: I0313 12:38:53.860287 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 13 12:38:53.861885 master-0 kubenswrapper[6980]: I0313 12:38:53.861842 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1213b50-28bf-43ff-94c4-20616907735b-trusted-ca\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:38:53.863792 master-0 kubenswrapper[6980]: I0313 12:38:53.863763 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:38:53.868069 master-0 kubenswrapper[6980]: I0313 12:38:53.868032 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:38:53.869040 master-0 kubenswrapper[6980]: I0313 12:38:53.869015 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c2d774-967f-4964-ab4e-eb13c4364f63-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:38:53.871890 master-0 kubenswrapper[6980]: I0313 12:38:53.871741 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 13 12:38:53.891256 master-0 kubenswrapper[6980]: I0313 12:38:53.891222 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 13 12:38:53.911674 master-0 kubenswrapper[6980]: I0313 12:38:53.911555 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 13 12:38:53.915193 master-0 kubenswrapper[6980]: I0313 12:38:53.915169 6980 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 13 12:38:53.916786 master-0 kubenswrapper[6980]: I0313 12:38:53.916759 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-script-lib\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.964966 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.965078 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.965196 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.965315 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-cnibin\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.965349 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-socket-dir-parent\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.965459 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-cnibin\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.965560 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-system-cni-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.965676 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-socket-dir-parent\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.965737 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-system-cni-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.965920 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-multus-certs\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.965992 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-system-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966030 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-systemd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966085 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966156 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b5ab386-14ed-4610-a08a-54b6de877603-host-slash\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966221 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966327 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cnibin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966381 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-k8s-cni-cncf-io\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966440 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966488 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-host-etc-kube\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966591 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-conf-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966636 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966675 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-netns\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.966677 master-0 kubenswrapper[6980]: I0313 12:38:53.966733 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:53.968605 master-0 kubenswrapper[6980]: I0313 12:38:53.966872 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:38:53.968605 master-0 kubenswrapper[6980]: I0313 12:38:53.966937 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:38:53.968605 master-0 kubenswrapper[6980]: I0313 12:38:53.966981 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-multus\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.968605 master-0 kubenswrapper[6980]: I0313 12:38:53.967038 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-hostroot\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.968605 master-0 kubenswrapper[6980]: I0313 12:38:53.967096 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.968605 master-0 kubenswrapper[6980]: I0313 12:38:53.967141 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b5ab386-14ed-4610-a08a-54b6de877603-iptables-alerter-script\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5"
Mar 13 12:38:53.968605 master-0 kubenswrapper[6980]: I0313 12:38:53.967203 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2"
Mar 13 12:38:53.968605 master-0 kubenswrapper[6980]: I0313 12:38:53.967267 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-systemd-units\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.968605 master-0 kubenswrapper[6980]: I0313 12:38:53.967304 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:53.968605 master-0 kubenswrapper[6980]: I0313 12:38:53.967353 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-bin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.975944 master-0 kubenswrapper[6980]: I0313 12:38:53.975903 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-kubelet\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.976054 master-0 kubenswrapper[6980]: I0313 12:38:53.975973 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:53.976054 master-0 kubenswrapper[6980]: E0313 12:38:53.975992 6980 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 12:38:53.976156 master-0 kubenswrapper[6980]: E0313 12:38:53.976106 6980 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 13 12:38:53.976156 master-0 kubenswrapper[6980]: E0313 12:38:53.976118 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.476091132 +0000 UTC m=+1.810085928 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found
Mar 13 12:38:53.976156 master-0 kubenswrapper[6980]: E0313 12:38:53.976162 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.476144293 +0000 UTC m=+1.810138919 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : secret "metrics-daemon-secret" not found
Mar 13 12:38:53.976156 master-0 kubenswrapper[6980]: I0313 12:38:53.976160 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-ovn\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976389 master-0 kubenswrapper[6980]: I0313 12:38:53.976194 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqxjz\" (UniqueName: \"kubernetes.io/projected/2b5ab386-14ed-4610-a08a-54b6de877603-kube-api-access-nqxjz\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5"
Mar 13 12:38:53.976389 master-0 kubenswrapper[6980]: I0313 12:38:53.976234 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-var-lib-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976389 master-0 kubenswrapper[6980]: I0313 12:38:53.976253 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:53.976389 master-0 kubenswrapper[6980]: I0313 12:38:53.976274 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-log-socket\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976389 master-0 kubenswrapper[6980]: I0313 12:38:53.976321 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-kubelet\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976389 master-0 kubenswrapper[6980]: I0313 12:38:53.976339 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-netd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976389 master-0 kubenswrapper[6980]: I0313 12:38:53.976373 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-node-log\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976389 master-0 kubenswrapper[6980]: E0313 12:38:53.976385 6980 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976404 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-node-log\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976425 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-multus-certs\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: E0313 12:38:53.976434 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.476421052 +0000 UTC m=+1.810415728 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976437 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-system-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976450 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-netns\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976470 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976474 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976523 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-etc-kubernetes\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976568 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cnibin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976632 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-k8s-cni-cncf-io\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976666 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976711 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-bin\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: E0313 12:38:53.976106 6980 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976195 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-ovn\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: I0313 12:38:53.976797 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-var-lib-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.976875 master-0 kubenswrapper[6980]: E0313 12:38:53.976841 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: E0313 12:38:53.976990 6980 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977058 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-etc-kubernetes\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977073 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-systemd-units\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977116 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977116 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-multus\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977127 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-bin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977144 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-hostroot\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977150 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-log-socket\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977187 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-kubelet\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977214 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-conf-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977240 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-bin\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:53.977566 master-0
kubenswrapper[6980]: I0313 12:38:53.977274 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-systemd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: E0313 12:38:53.977289 6980 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977325 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b5ab386-14ed-4610-a08a-54b6de877603-iptables-alerter-script\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977332 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-netd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: E0313 12:38:53.977385 6980 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: E0313 12:38:53.977404 6980 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977456 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" 
(UniqueName: \"kubernetes.io/host-path/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-host-etc-kube\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977491 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.977566 master-0 kubenswrapper[6980]: I0313 12:38:53.977495 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-kubelet\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: E0313 12:38:53.977776 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.476759171 +0000 UTC m=+1.810753857 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: E0313 12:38:53.977857 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.477842405 +0000 UTC m=+1.811837031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: E0313 12:38:53.977891 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.477882226 +0000 UTC m=+1.811876882 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: E0313 12:38:53.977961 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:38:54.477949288 +0000 UTC m=+1.811943924 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: E0313 12:38:53.977999 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.477990909 +0000 UTC m=+1.811985596 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: E0313 12:38:53.978016 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.47800885 +0000 UTC m=+1.812003476 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: I0313 12:38:53.978057 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: I0313 12:38:53.978195 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-slash\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: I0313 12:38:53.978317 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: E0313 12:38:53.978151 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: I0313 12:38:53.978350 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-netns\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: I0313 12:38:53.978398 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-netns\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: I0313 12:38:53.978317 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-slash\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: E0313 12:38:53.978400 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.478374971 +0000 UTC m=+1.812369617 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: I0313 12:38:53.978468 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.978675 master-0 kubenswrapper[6980]: I0313 12:38:53.978642 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-os-release\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: I0313 12:38:53.978755 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-os-release\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: I0313 12:38:53.978816 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-etc-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: I0313 12:38:53.978837 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: I0313 12:38:53.978856 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: I0313 12:38:53.978874 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-etc-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: I0313 12:38:53.978890 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: E0313 12:38:53.978916 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: E0313 12:38:53.978944 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found 
Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: E0313 12:38:53.978958 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.478946849 +0000 UTC m=+1.812941555 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: E0313 12:38:53.978975 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.47896688 +0000 UTC m=+1.812961696 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: I0313 12:38:53.978983 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: I0313 12:38:53.978999 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: I0313 12:38:53.979064 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-os-release\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: E0313 12:38:53.979127 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: I0313 12:38:53.979147 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: 
\"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-os-release\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:38:53.979194 master-0 kubenswrapper[6980]: E0313 12:38:53.979158 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:54.479148825 +0000 UTC m=+1.813143451 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:53.981357 master-0 kubenswrapper[6980]: I0313 12:38:53.981333 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-bound-sa-token\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:53.993386 master-0 kubenswrapper[6980]: I0313 12:38:53.993332 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9nhl\" (UniqueName: \"kubernetes.io/projected/ffcc3a23-d81c-4064-a24a-857dbe3222c8-kube-api-access-b9nhl\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:38:54.004556 master-0 kubenswrapper[6980]: I0313 12:38:54.004496 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n8sb\" (UniqueName: 
\"kubernetes.io/projected/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa-kube-api-access-9n8sb\") pod \"csi-snapshot-controller-operator-5685fbc7d-77b2h\" (UID: \"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" Mar 13 12:38:54.021511 master-0 kubenswrapper[6980]: I0313 12:38:54.021457 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2jkn\" (UniqueName: \"kubernetes.io/projected/6e55908e-59f3-45a2-82aa-2616c5a2fd52-kube-api-access-x2jkn\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:38:54.043656 master-0 kubenswrapper[6980]: I0313 12:38:54.043438 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27pbr\" (UniqueName: \"kubernetes.io/projected/2a5976df-0366-47b3-bc54-1ba7c249e87c-kube-api-access-27pbr\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:54.063603 master-0 kubenswrapper[6980]: I0313 12:38:54.063502 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1929440f-f2cc-450d-80ff-ded6788baa74-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:38:54.080758 master-0 kubenswrapper[6980]: I0313 12:38:54.080686 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b5ab386-14ed-4610-a08a-54b6de877603-host-slash\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " 
pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:54.081096 master-0 kubenswrapper[6980]: I0313 12:38:54.080797 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b5ab386-14ed-4610-a08a-54b6de877603-host-slash\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:38:54.084552 master-0 kubenswrapper[6980]: I0313 12:38:54.084501 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qttkt\" (UniqueName: \"kubernetes.io/projected/54c7efc1-6d89-4831-89d6-6f2812c36c36-kube-api-access-qttkt\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:38:54.103317 master-0 kubenswrapper[6980]: I0313 12:38:54.103257 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqm5h\" (UniqueName: \"kubernetes.io/projected/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-kube-api-access-pqm5h\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:54.110520 master-0 kubenswrapper[6980]: I0313 12:38:54.107426 6980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:38:54.123198 master-0 kubenswrapper[6980]: I0313 12:38:54.123169 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2dq8\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-kube-api-access-c2dq8\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:54.143813 
master-0 kubenswrapper[6980]: I0313 12:38:54.143763 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x27d2\" (UniqueName: \"kubernetes.io/projected/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-kube-api-access-x27d2\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:38:54.164181 master-0 kubenswrapper[6980]: I0313 12:38:54.164127 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm25n\" (UniqueName: \"kubernetes.io/projected/f85ab8ab-f9f1-47ad-9c96-9498cef92474-kube-api-access-sm25n\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:38:54.186486 master-0 kubenswrapper[6980]: I0313 12:38:54.186435 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg7nx\" (UniqueName: \"kubernetes.io/projected/cf580693-2931-4fef-adb5-b396f7303352-kube-api-access-qg7nx\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:38:54.205273 master-0 kubenswrapper[6980]: I0313 12:38:54.205200 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9sfh\" (UniqueName: \"kubernetes.io/projected/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-kube-api-access-r9sfh\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:38:54.224528 master-0 kubenswrapper[6980]: I0313 12:38:54.224482 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2lvh\" (UniqueName: 
\"kubernetes.io/projected/1ad68c2d-762a-47ed-bd56-e823a83b9087-kube-api-access-b2lvh\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:54.246915 master-0 kubenswrapper[6980]: I0313 12:38:54.246827 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jknp\" (UniqueName: \"kubernetes.io/projected/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-kube-api-access-5jknp\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:38:54.264482 master-0 kubenswrapper[6980]: I0313 12:38:54.264428 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn59j\" (UniqueName: \"kubernetes.io/projected/6d1a0616-4479-4621-b042-36a586bd8248-kube-api-access-jn59j\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:38:54.283890 master-0 kubenswrapper[6980]: I0313 12:38:54.283834 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnbf9\" (UniqueName: \"kubernetes.io/projected/b2ad4825-17fa-4ddd-b21e-334158f1c048-kube-api-access-tnbf9\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"
Mar 13 12:38:54.303426 master-0 kubenswrapper[6980]: I0313 12:38:54.303259 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:38:54.318436 master-0 kubenswrapper[6980]: I0313 12:38:54.318353 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:38:54.323661 master-0 kubenswrapper[6980]: I0313 12:38:54.323614 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3eb38e0-d8b5-46fc-809d-73791d569816-kube-api-access\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:54.323919 master-0 kubenswrapper[6980]: I0313 12:38:54.323898 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:38:54.344271 master-0 kubenswrapper[6980]: I0313 12:38:54.344213 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-992bv\" (UniqueName: \"kubernetes.io/projected/edde8919-104a-4f05-8e21-46787f706bed-kube-api-access-992bv\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:38:54.365113 master-0 kubenswrapper[6980]: I0313 12:38:54.365000 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/684c9067-189a-4f50-ac8d-97111aa73d9c-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:38:54.383978 master-0 kubenswrapper[6980]: I0313 12:38:54.383903 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d868028-9984-472a-8403-ffed767e1bf8-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt"
Mar 13 12:38:54.404865 master-0 kubenswrapper[6980]: I0313 12:38:54.404828 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5ht\" (UniqueName: \"kubernetes.io/projected/71b741d4-3899-4d31-afd1-72f5a9321f75-kube-api-access-2h5ht\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:38:54.426633 master-0 kubenswrapper[6980]: I0313 12:38:54.426408 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvprm\" (UniqueName: \"kubernetes.io/projected/20217cff-2f81-4a56-9c15-28385c19258c-kube-api-access-nvprm\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:38:54.442320 master-0 kubenswrapper[6980]: I0313 12:38:54.442265 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkjph\" (UniqueName: \"kubernetes.io/projected/f2a74c2a-8376-4998-bdc6-02a978f1f568-kube-api-access-bkjph\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:38:54.462302 master-0 kubenswrapper[6980]: I0313 12:38:54.462241 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2cb\" (UniqueName: \"kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:38:54.482764 master-0 kubenswrapper[6980]: E0313 12:38:54.482672 6980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56"
Mar 13 12:38:54.483106 master-0 kubenswrapper[6980]: E0313 12:38:54.483024 6980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:kube-controller-manager-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56,Command:[cluster-kube-controller-manager-operator operator],Args:[--config=/var/run/configmaps/config/config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56,ValueFrom:nil,},EnvVar{Name:CLUSTER_POLICY_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609,ValueFrom:nil,},EnvVar{Name:TOOLS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35768a0c3eb24134dd38633e8acfc7db69ee96b2fd660e9bba3b8c996452fef7,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.31.14,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod kube-controller-manager-operator-86d7cdfdfb-nwclt_openshift-kube-controller-manager-operator(0d868028-9984-472a-8403-ffed767e1bf8): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError"
Mar 13 12:38:54.483697 master-0 kubenswrapper[6980]: I0313 12:38:54.483662 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdvgq\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-kube-api-access-bdvgq\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:38:54.484764 master-0 kubenswrapper[6980]: E0313 12:38:54.484692 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" podUID="0d868028-9984-472a-8403-ffed767e1bf8"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.488843 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.488927 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.488960 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.488982 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.489000 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.489018 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.489036 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.489060 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.489079 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.489098 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.489124 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.489142 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: I0313 12:38:54.489158 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489369 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489428 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.489412958 +0000 UTC m=+2.823407584 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489504 6980 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489554 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.489545932 +0000 UTC m=+2.823540558 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489646 6980 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489712 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.489702707 +0000 UTC m=+2.823697333 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489824 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489876 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.489868262 +0000 UTC m=+2.823862888 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489880 6980 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489951 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.489928274 +0000 UTC m=+2.823922940 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489952 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489984 6980 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489997 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.489988335 +0000 UTC m=+2.823983031 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.490039 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.490027837 +0000 UTC m=+2.824022513 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.489833 6980 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.490078 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.490070648 +0000 UTC m=+2.824065364 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : secret "metrics-daemon-secret" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.490093 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.490135 6980 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.490143 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.49013352 +0000 UTC m=+2.824128226 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.490050 6980 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.490170 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.490161021 +0000 UTC m=+2.824155717 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found
Mar 13 12:38:54.489875 master-0 kubenswrapper[6980]: E0313 12:38:54.490204 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.490193362 +0000 UTC m=+2.824187988 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found
Mar 13 12:38:54.491683 master-0 kubenswrapper[6980]: E0313 12:38:54.490217 6980 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:38:54.491683 master-0 kubenswrapper[6980]: E0313 12:38:54.490246 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.490237543 +0000 UTC m=+2.824232239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found
Mar 13 12:38:54.491683 master-0 kubenswrapper[6980]: E0313 12:38:54.490292 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 12:38:54.491683 master-0 kubenswrapper[6980]: E0313 12:38:54.490332 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:55.490321396 +0000 UTC m=+2.824316082 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found
Mar 13 12:38:54.585125 master-0 kubenswrapper[6980]: I0313 12:38:54.584108 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdcsm\" (UniqueName: \"kubernetes.io/projected/6e4e773c-d970-4f5e-9172-c1ebdb41888d-kube-api-access-tdcsm\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:38:54.596907 master-0 kubenswrapper[6980]: I0313 12:38:54.596847 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkcxc\" (UniqueName: \"kubernetes.io/projected/8226ffac-1f76-4eaa-ada5-056b5fd031b4-kube-api-access-gkcxc\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:38:54.598148 master-0 kubenswrapper[6980]: I0313 12:38:54.597777 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp6bn\" (UniqueName: \"kubernetes.io/projected/59c9773d-7e88-4e30-9b8a-792a869a860e-kube-api-access-vp6bn\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:54.598148 master-0 kubenswrapper[6980]: I0313 12:38:54.598111 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vsld\" (UniqueName: \"kubernetes.io/projected/73dc5747-2d30-4a2d-a784-1dea1e10811d-kube-api-access-9vsld\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg"
Mar 13 12:38:54.600250 master-0 kubenswrapper[6980]: I0313 12:38:54.600225 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rspzx\" (UniqueName: \"kubernetes.io/projected/603fef71-e0cd-4617-bd8a-a55580578c2f-kube-api-access-rspzx\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882"
Mar 13 12:38:54.602679 master-0 kubenswrapper[6980]: E0313 12:38:54.602645 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:38:54.617199 master-0 kubenswrapper[6980]: E0313 12:38:54.617145 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:38:54.635971 master-0 kubenswrapper[6980]: E0313 12:38:54.635901 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 12:38:54.656293 master-0 kubenswrapper[6980]: W0313 12:38:54.656219 6980 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 13 12:38:54.656528 master-0 kubenswrapper[6980]: E0313 12:38:54.656327 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:38:54.678293 master-0 kubenswrapper[6980]: E0313 12:38:54.678236 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:38:54.724148 master-0 kubenswrapper[6980]: I0313 12:38:54.724084 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqxjz\" (UniqueName: \"kubernetes.io/projected/2b5ab386-14ed-4610-a08a-54b6de877603-kube-api-access-nqxjz\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5"
Mar 13 12:38:54.743965 master-0 kubenswrapper[6980]: I0313 12:38:54.743901 6980 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 13 12:38:54.750325 master-0 kubenswrapper[6980]: I0313 12:38:54.750292 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:55.017454 master-0 kubenswrapper[6980]: I0313 12:38:55.017384 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:55.505910 master-0 kubenswrapper[6980]: I0313 12:38:55.505795 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:38:55.505910 master-0 kubenswrapper[6980]: I0313 12:38:55.505886 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2"
Mar 13 12:38:55.505910 master-0 kubenswrapper[6980]: I0313 12:38:55.505917 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: I0313 12:38:55.505946 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: I0313 12:38:55.505978 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506002 6980 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506066 6980 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506080 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506049894 +0000 UTC m=+4.840044520 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: I0313 12:38:55.506008 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506097 6980 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506110 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506096196 +0000 UTC m=+4.840090822 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506066 6980 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506141 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506131007 +0000 UTC m=+4.840125633 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: I0313 12:38:55.506133 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: I0313 12:38:55.506169 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506176 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506164518 +0000 UTC m=+4.840159144 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: I0313 12:38:55.506199 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: I0313 12:38:55.506230 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506237 6980 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 
12:38:55.506297 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506300 6980 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506322 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506315152 +0000 UTC m=+4.840309778 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:38:55.506343 master-0 kubenswrapper[6980]: E0313 12:38:55.506333 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506328113 +0000 UTC m=+4.840322739 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506387 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506348334 +0000 UTC m=+4.840342980 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : secret "metrics-daemon-secret" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: I0313 12:38:55.506261 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506394 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506441 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506478 6980 secret.go:189] Couldn't 
get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: I0313 12:38:55.506474 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506477 6980 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506519 6980 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506510 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506480618 +0000 UTC m=+4.840475244 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506630 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:38:57.506616742 +0000 UTC m=+4.840611448 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: I0313 12:38:55.506656 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506690 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506684194 +0000 UTC m=+4.840678820 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506701 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506696314 +0000 UTC m=+4.840690940 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506711 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506705915 +0000 UTC m=+4.840700541 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506729 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:38:55.507204 master-0 kubenswrapper[6980]: E0313 12:38:55.506776 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:38:57.506759296 +0000 UTC m=+4.840753922 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found Mar 13 12:38:55.693473 master-0 kubenswrapper[6980]: E0313 12:38:55.693369 6980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460" Mar 13 12:38:55.693715 master-0 kubenswrapper[6980]: E0313 12:38:55.693671 6980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqxjz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-456r5_openshift-network-operator(2b5ab386-14ed-4610-a08a-54b6de877603): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 12:38:55.695250 master-0 kubenswrapper[6980]: E0313 12:38:55.694880 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-network-operator/iptables-alerter-456r5" podUID="2b5ab386-14ed-4610-a08a-54b6de877603" Mar 13 12:38:56.324511 master-0 kubenswrapper[6980]: E0313 12:38:56.324419 6980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" 
image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5" Mar 13 12:38:56.325235 master-0 kubenswrapper[6980]: E0313 12:38:56.324685 6980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:openshift-config-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5,Command:[cluster-config-operator operator --operator-version=$(OPERATOR_IMAGE_VERSION) --authoritative-feature-gate-dir=/available-featuregates],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:available-featuregates,ReadOnly:false,MountPath:/available-featuregates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-992bv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:3,TimeoutSeconds:1,PeriodSeconds:3,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod openshift-config-operator-64488f9d78-tml9z_openshift-config-operator(edde8919-104a-4f05-8e21-46787f706bed): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 12:38:56.326884 master-0 kubenswrapper[6980]: E0313 12:38:56.326662 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" Mar 13 12:38:56.681275 master-0 kubenswrapper[6980]: I0313 12:38:56.681208 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:56.681592 master-0 
kubenswrapper[6980]: I0313 12:38:56.681430 6980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:38:56.686550 master-0 kubenswrapper[6980]: I0313 12:38:56.686515 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 12:38:56.989794 master-0 kubenswrapper[6980]: E0313 12:38:56.989624 6980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba" Mar 13 12:38:56.989998 master-0 kubenswrapper[6980]: E0313 12:38:56.989862 6980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:service-ca-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba,Command:[service-ca-operator operator],Args:[--config=/var/run/configmaps/config/operator-config.yaml -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{83886080 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rspzx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-ca-operator-69b6fc6b88-2d882_openshift-service-ca-operator(603fef71-e0cd-4617-bd8a-a55580578c2f): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 12:38:56.992892 master-0 kubenswrapper[6980]: E0313 12:38:56.992817 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"service-ca-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" podUID="603fef71-e0cd-4617-bd8a-a55580578c2f" Mar 13 12:38:57.483222 master-0 kubenswrapper[6980]: E0313 12:38:57.482894 6980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: 
context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3" Mar 13 12:38:57.483808 master-0 kubenswrapper[6980]: E0313 12:38:57.483386 6980 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-snapshot-controller-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3,Command:[],Args:[start -v=2],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:OPERAND_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1,ValueFrom:nil,},EnvVar{Name:WEBHOOK_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d5e9989ee0577e930adcd97085176343a881bf92537dda1bf0325a3b1faf96d6,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-9n8sb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000160000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-snapshot-controller-operator-5685fbc7d-77b2h_openshift-cluster-storage-operator(a6a45be0-19ef-4d36-b8a7-eb2705d24bfa): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Mar 13 12:38:57.485128 master-0 kubenswrapper[6980]: E0313 12:38:57.485055 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-snapshot-controller-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" podUID="a6a45be0-19ef-4d36-b8a7-eb2705d24bfa" Mar 13 12:38:57.536015 master-0 kubenswrapper[6980]: I0313 12:38:57.535797 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 
12:38:57.536015 master-0 kubenswrapper[6980]: I0313 12:38:57.535961 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:38:57.536015 master-0 kubenswrapper[6980]: I0313 12:38:57.535991 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:38:57.536015 master-0 kubenswrapper[6980]: I0313 12:38:57.536016 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2"
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536028 6980 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: I0313 12:38:57.536043 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536115 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.536096879 +0000 UTC m=+8.870091505 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536156 6980 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536201 6980 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536248 6980 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: I0313 12:38:57.536249 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536296 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536251 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.536233883 +0000 UTC m=+8.870228509 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536314 6980 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536342 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.536322386 +0000 UTC m=+8.870317012 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536366 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.536358737 +0000 UTC m=+8.870353363 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: I0313 12:38:57.536385 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: I0313 12:38:57.536443 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536520 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 12:38:57.536594 master-0 kubenswrapper[6980]: E0313 12:38:57.536550 6980 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.536724 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.5364626 +0000 UTC m=+8.870457226 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : secret "metrics-daemon-secret" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.536750 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.536741069 +0000 UTC m=+8.870735685 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: I0313 12:38:57.536776 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: I0313 12:38:57.536803 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: I0313 12:38:57.536827 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: I0313 12:38:57.536852 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: I0313 12:38:57.536880 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.536911 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.536894754 +0000 UTC m=+8.870889420 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.536962 6980 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.536978 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.536964416 +0000 UTC m=+8.870959122 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.536950 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.537010 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.536994 6980 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.537049 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.537144 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.537009337 +0000 UTC m=+8.871003973 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.537168 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.537157362 +0000 UTC m=+8.871152098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.537190 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.537183613 +0000 UTC m=+8.871178359 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.537253 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.537242454 +0000 UTC m=+8.871237160 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found
Mar 13 12:38:57.538479 master-0 kubenswrapper[6980]: E0313 12:38:57.537309 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:01.537297316 +0000 UTC m=+8.871292032 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found
Mar 13 12:38:58.053232 master-0 kubenswrapper[6980]: E0313 12:38:58.053147 6980 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953"
Mar 13 12:38:58.053607 master-0 kubenswrapper[6980]: E0313 12:38:58.053507 6980 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Mar 13 12:38:58.053607 master-0 kubenswrapper[6980]: container &Container{Name:authentication-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953,Command:[/bin/bash -ec],Args:[if [ -s /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt ]; then
Mar 13 12:38:58.053607 master-0 kubenswrapper[6980]: echo "Copying system trust bundle"
Mar 13 12:38:58.053607 master-0 kubenswrapper[6980]: cp -f /var/run/configmaps/trusted-ca-bundle/ca-bundle.crt /etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem
Mar 13 12:38:58.053607 master-0 kubenswrapper[6980]: fi
Mar 13 12:38:58.053607 master-0 kubenswrapper[6980]: exec authentication-operator operator --config=/var/run/configmaps/config/operator-config.yaml --v=2 --terminate-on-files=/var/run/configmaps/trusted-ca-bundle/ca-bundle.crt --terminate-on-files=/tmp/terminate
Mar 13 12:38:58.053607 master-0 kubenswrapper[6980]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:IMAGE_OAUTH_SERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3d3571ade02a7c61123d62c53fda6a57031a52c058c0571759dc09f96b23978f,ValueFrom:nil,},EnvVar{Name:IMAGE_OAUTH_APISERVER,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:4.18.34,ValueFrom:nil,},EnvVar{Name:OPERAND_OAUTH_SERVER_IMAGE_VERSION,Value:4.18.34_openshift,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:false,MountPath:/var/run/configmaps/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:serving-cert,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:trusted-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/trusted-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:service-ca-bundle,ReadOnly:true,MountPath:/var/run/configmaps/service-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bkjph,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:healthz,Port:{0 8443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:30,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod authentication-operator-7c6989d6c4-ztmrr_openshift-authentication-operator(f2a74c2a-8376-4998-bdc6-02a978f1f568): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled
Mar 13 12:38:58.053607 master-0 kubenswrapper[6980]: > logger="UnhandledError"
Mar 13 12:38:58.055651 master-0 kubenswrapper[6980]: E0313 12:38:58.055554 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568"
Mar 13 12:38:58.075043 master-0 kubenswrapper[6980]: I0313 12:38:58.074618 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:58.120165 master-0 kubenswrapper[6980]: I0313 12:38:58.120077 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:58.478351 master-0 kubenswrapper[6980]: I0313 12:38:58.478285 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:38:58.484481 master-0 kubenswrapper[6980]: I0313 12:38:58.484446 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:38:58.734661 master-0 kubenswrapper[6980]: I0313 12:38:58.734028 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-jjmb8"]
Mar 13 12:38:58.748591 master-0 kubenswrapper[6980]: W0313 12:38:58.748525 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c8b79e_4d29_4ae2_a24f_68595d942442.slice/crio-b8d42a515c20f0a163956eb8cf93dea5da1bfe49ebc70be65a7367110ca9d5ce WatchSource:0}: Error finding container b8d42a515c20f0a163956eb8cf93dea5da1bfe49ebc70be65a7367110ca9d5ce: Status 404 returned error can't find the container with id b8d42a515c20f0a163956eb8cf93dea5da1bfe49ebc70be65a7367110ca9d5ce
Mar 13 12:38:59.033231 master-0 kubenswrapper[6980]: I0313 12:38:59.033086 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:38:59.039613 master-0 kubenswrapper[6980]: I0313 12:38:59.038821 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-jjmb8" event={"ID":"70c8b79e-4d29-4ae2-a24f-68595d942442","Type":"ContainerStarted","Data":"32eb97f680df0606da76765749aa85a302f4f0dae00dc3844bf996b5ced3dfa3"}
Mar 13 12:38:59.039613 master-0 kubenswrapper[6980]: I0313 12:38:59.038855 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-jjmb8" event={"ID":"70c8b79e-4d29-4ae2-a24f-68595d942442","Type":"ContainerStarted","Data":"b8d42a515c20f0a163956eb8cf93dea5da1bfe49ebc70be65a7367110ca9d5ce"}
Mar 13 12:38:59.039613 master-0 kubenswrapper[6980]: I0313 12:38:59.038880 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-jjmb8"
Mar 13 12:38:59.043600 master-0 kubenswrapper[6980]: I0313 12:38:59.040568 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" event={"ID":"6e55908e-59f3-45a2-82aa-2616c5a2fd52","Type":"ContainerStarted","Data":"7cea7ef63e0a2bbd7a51a61ea7823a56840343f0d56d2b827f3841e4907fb6b2"}
Mar 13 12:38:59.043600 master-0 kubenswrapper[6980]: I0313 12:38:59.042148 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" event={"ID":"73dc5747-2d30-4a2d-a784-1dea1e10811d","Type":"ContainerStarted","Data":"d691dfff8d938f7ef898022014143d56dbbe1b4283d8d74c7b7938096f18aafe"}
Mar 13 12:38:59.046428 master-0 kubenswrapper[6980]: I0313 12:38:59.044148 6980 generic.go:334] "Generic (PLEG): container finished" podID="54c7efc1-6d89-4831-89d6-6f2812c36c36" containerID="a98ec2d4ab9f0fe2fc054a10c78b5e6b8e752b65cb577bb397dd5d71aaf3f3e3" exitCode=0
Mar 13 12:38:59.046428 master-0 kubenswrapper[6980]: I0313 12:38:59.044194 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" event={"ID":"54c7efc1-6d89-4831-89d6-6f2812c36c36","Type":"ContainerDied","Data":"a98ec2d4ab9f0fe2fc054a10c78b5e6b8e752b65cb577bb397dd5d71aaf3f3e3"}
Mar 13 12:38:59.046428 master-0 kubenswrapper[6980]: I0313 12:38:59.045880 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" event={"ID":"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6","Type":"ContainerStarted","Data":"d2c23685e01b04fc93d262aa5b6ebee8c573cd64c0296928ae13eaf96f993a18"}
Mar 13 12:38:59.049731 master-0 kubenswrapper[6980]: I0313 12:38:59.049406 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" event={"ID":"b2ad4825-17fa-4ddd-b21e-334158f1c048","Type":"ContainerStarted","Data":"a9dd7732800ec2cf2ba2657ee89d490d35d4ed3ca8ea35ffd325cd650a57aa03"}
Mar 13 12:38:59.057621 master-0 kubenswrapper[6980]: I0313 12:38:59.055028 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" event={"ID":"1929440f-f2cc-450d-80ff-ded6788baa74","Type":"ContainerStarted","Data":"add6080be63d96ac6d15e6ae92fd130acd330b669019c0708be53e9f316105b4"}
Mar 13 12:38:59.057621 master-0 kubenswrapper[6980]: I0313 12:38:59.055265 6980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:38:59.057621 master-0 kubenswrapper[6980]: I0313 12:38:59.055291 6980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:38:59.109610 master-0 kubenswrapper[6980]: I0313 12:38:59.107607 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:39:00.065617 master-0 kubenswrapper[6980]: I0313 12:39:00.062749 6980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:39:00.137266 master-0 kubenswrapper[6980]: I0313 12:39:00.136711 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:39:00.137266 master-0 kubenswrapper[6980]: I0313 12:39:00.136863 6980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:39:00.149619 master-0 kubenswrapper[6980]: I0313 12:39:00.147106 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:39:00.438650 master-0 kubenswrapper[6980]: I0313 12:39:00.436666 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828"]
Mar 13 12:39:00.438650 master-0 kubenswrapper[6980]: E0313 12:39:00.436926 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b19a429-6a4f-4f90-9901-417fe8921ccc" containerName="prober"
Mar 13 12:39:00.438650 master-0 kubenswrapper[6980]: I0313 12:39:00.436963 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b19a429-6a4f-4f90-9901-417fe8921ccc" containerName="prober"
Mar 13 12:39:00.438650 master-0 kubenswrapper[6980]: E0313 12:39:00.436996 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerName="assisted-installer-controller"
Mar 13 12:39:00.438650 master-0 kubenswrapper[6980]: I0313 12:39:00.437005 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerName="assisted-installer-controller"
Mar 13 12:39:00.438650 master-0 kubenswrapper[6980]: I0313 12:39:00.437116 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerName="assisted-installer-controller"
Mar 13 12:39:00.438650 master-0 kubenswrapper[6980]: I0313 12:39:00.437132 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b19a429-6a4f-4f90-9901-417fe8921ccc" containerName="prober"
Mar 13 12:39:00.438650 master-0 kubenswrapper[6980]: I0313 12:39:00.437559 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828"
Mar 13 12:39:00.443288 master-0 kubenswrapper[6980]: I0313 12:39:00.443080 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 13 12:39:00.443288 master-0 kubenswrapper[6980]: I0313 12:39:00.443159 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 13 12:39:00.452801 master-0 kubenswrapper[6980]: I0313 12:39:00.452724 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828"]
Mar 13 12:39:00.532112 master-0 kubenswrapper[6980]: I0313 12:39:00.532030 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wk9km\" (UniqueName: \"kubernetes.io/projected/d53c7e46-86e9-4328-9dfd-aec6deef5c01-kube-api-access-wk9km\") pod \"migrator-57ccdf9b5-xt828\" (UID: \"d53c7e46-86e9-4328-9dfd-aec6deef5c01\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828"
Mar 13 12:39:00.633492 master-0 kubenswrapper[6980]: I0313 12:39:00.633418 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk9km\" (UniqueName: \"kubernetes.io/projected/d53c7e46-86e9-4328-9dfd-aec6deef5c01-kube-api-access-wk9km\") pod \"migrator-57ccdf9b5-xt828\" (UID: \"d53c7e46-86e9-4328-9dfd-aec6deef5c01\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828"
Mar 13 12:39:00.659502 master-0 kubenswrapper[6980]: I0313 12:39:00.659443 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk9km\" (UniqueName: \"kubernetes.io/projected/d53c7e46-86e9-4328-9dfd-aec6deef5c01-kube-api-access-wk9km\") pod \"migrator-57ccdf9b5-xt828\" (UID: \"d53c7e46-86e9-4328-9dfd-aec6deef5c01\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828"
Mar 13 12:39:00.778519 master-0 kubenswrapper[6980]: I0313 12:39:00.778365 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828"
Mar 13 12:39:01.062060 master-0 kubenswrapper[6980]: I0313 12:39:01.061559 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828"]
Mar 13 12:39:01.069172 master-0 kubenswrapper[6980]: I0313 12:39:01.069122 6980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:39:01.070064 master-0 kubenswrapper[6980]: I0313 12:39:01.069254 6980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:39:01.562836 master-0 kubenswrapper[6980]: I0313 12:39:01.410418 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"]
Mar 13 12:39:01.562836 master-0 kubenswrapper[6980]: I0313 12:39:01.411339 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563520 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563612 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563649 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563682 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563711 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563734 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563764 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563814 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563856 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563880 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563903 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563934 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: I0313 12:39:01.563965 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564194 6980 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564316 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.56426921 +0000 UTC m=+16.898263836 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564698 6980 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564739 6980 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564806 6980 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564854 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564872 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 12:39:01.565419 master-0
kubenswrapper[6980]: E0313 12:39:01.564811 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564923 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564928 6980 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564955 6980 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564750 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.564734844 +0000 UTC m=+16.898729470 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564974 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564983 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.564973582 +0000 UTC m=+16.898968208 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.564998 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.564992142 +0000 UTC m=+16.898986858 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565014 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.565003103 +0000 UTC m=+16.898997729 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565025 6980 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565033 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.565023153 +0000 UTC m=+16.899017779 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565069 6980 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565076 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.565068725 +0000 UTC m=+16.899063341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565094 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.565086185 +0000 UTC m=+16.899080811 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565106 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.565100066 +0000 UTC m=+16.899094712 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565119 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.565114216 +0000 UTC m=+16.899108842 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565142 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:39:09.565134527 +0000 UTC m=+16.899129233 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565163 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.565157348 +0000 UTC m=+16.899151974 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:39:01.565419 master-0 kubenswrapper[6980]: E0313 12:39:01.565173 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:39:09.565168598 +0000 UTC m=+16.899163224 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : secret "metrics-daemon-secret" not found Mar 13 12:39:01.569161 master-0 kubenswrapper[6980]: I0313 12:39:01.569114 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:39:01.569843 master-0 kubenswrapper[6980]: I0313 12:39:01.569808 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:39:01.569910 master-0 kubenswrapper[6980]: I0313 12:39:01.569866 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 12:39:01.569955 master-0 kubenswrapper[6980]: I0313 12:39:01.569922 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 12:39:01.570050 master-0 kubenswrapper[6980]: I0313 12:39:01.570015 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 12:39:01.570260 master-0 kubenswrapper[6980]: I0313 12:39:01.570228 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 12:39:01.576278 master-0 kubenswrapper[6980]: I0313 12:39:01.575615 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"] Mar 13 12:39:01.595606 master-0 kubenswrapper[6980]: I0313 12:39:01.595535 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"] Mar 13 12:39:01.605745 master-0 kubenswrapper[6980]: I0313 12:39:01.602998 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm" Mar 13 12:39:01.609327 master-0 kubenswrapper[6980]: I0313 12:39:01.609261 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 12:39:01.609327 master-0 kubenswrapper[6980]: I0313 12:39:01.609287 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 12:39:01.609327 master-0 kubenswrapper[6980]: I0313 12:39:01.609309 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 12:39:01.609614 master-0 kubenswrapper[6980]: I0313 12:39:01.609282 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 12:39:01.610478 master-0 kubenswrapper[6980]: I0313 12:39:01.610446 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"] Mar 13 12:39:01.616387 master-0 kubenswrapper[6980]: I0313 12:39:01.614012 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 12:39:01.665417 master-0 kubenswrapper[6980]: I0313 12:39:01.665339 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.665417 master-0 kubenswrapper[6980]: I0313 12:39:01.665419 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkbtj\" (UniqueName: 
\"kubernetes.io/projected/78c43844-98df-4837-a7e9-9fcf7b31b099-kube-api-access-wkbtj\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.665814 master-0 kubenswrapper[6980]: I0313 12:39:01.665485 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.665814 master-0 kubenswrapper[6980]: I0313 12:39:01.665605 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.665814 master-0 kubenswrapper[6980]: I0313 12:39:01.665725 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: I0313 12:39:01.766768 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " 
pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: I0313 12:39:01.766837 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkbtj\" (UniqueName: \"kubernetes.io/projected/78c43844-98df-4837-a7e9-9fcf7b31b099-kube-api-access-wkbtj\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: I0313 12:39:01.766895 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm" Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: I0313 12:39:01.766917 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm" Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: I0313 12:39:01.766936 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94sm\" (UniqueName: \"kubernetes.io/projected/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-kube-api-access-r94sm\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm" Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: I0313 12:39:01.766980 6980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: I0313 12:39:01.767033 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: I0313 12:39:01.767093 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm" Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: I0313 12:39:01.767115 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: E0313 12:39:01.767297 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 13 12:39:01.768639 master-0 kubenswrapper[6980]: E0313 12:39:01.767373 6980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config podName:78c43844-98df-4837-a7e9-9fcf7b31b099 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:02.267353382 +0000 UTC m=+9.601348008 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config") pod "controller-manager-6f7fd6c796-4jg64" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099") : configmap "config" not found Mar 13 12:39:01.769439 master-0 kubenswrapper[6980]: E0313 12:39:01.768869 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 13 12:39:01.769439 master-0 kubenswrapper[6980]: E0313 12:39:01.768906 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:01.769439 master-0 kubenswrapper[6980]: E0313 12:39:01.768951 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles podName:78c43844-98df-4837-a7e9-9fcf7b31b099 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:02.2689284 +0000 UTC m=+9.602923026 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-4jg64" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099") : configmap "openshift-global-ca" not found Mar 13 12:39:01.769439 master-0 kubenswrapper[6980]: E0313 12:39:01.768868 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:01.769439 master-0 kubenswrapper[6980]: E0313 12:39:01.768984 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca podName:78c43844-98df-4837-a7e9-9fcf7b31b099 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:02.268976872 +0000 UTC m=+9.602971498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca") pod "controller-manager-6f7fd6c796-4jg64" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099") : configmap "client-ca" not found Mar 13 12:39:01.769439 master-0 kubenswrapper[6980]: E0313 12:39:01.768996 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert podName:78c43844-98df-4837-a7e9-9fcf7b31b099 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:02.268991242 +0000 UTC m=+9.602985868 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert") pod "controller-manager-6f7fd6c796-4jg64" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099") : secret "serving-cert" not found Mar 13 12:39:01.801259 master-0 kubenswrapper[6980]: I0313 12:39:01.801194 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkbtj\" (UniqueName: \"kubernetes.io/projected/78c43844-98df-4837-a7e9-9fcf7b31b099-kube-api-access-wkbtj\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" Mar 13 12:39:01.851038 master-0 kubenswrapper[6980]: I0313 12:39:01.850893 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:39:01.856849 master-0 kubenswrapper[6980]: I0313 12:39:01.856796 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:39:01.863674 master-0 kubenswrapper[6980]: I0313 12:39:01.863623 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:39:01.867439 master-0 kubenswrapper[6980]: I0313 12:39:01.867403 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm" Mar 13 12:39:01.867599 master-0 kubenswrapper[6980]: I0313 12:39:01.867447 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm" Mar 13 12:39:01.867599 master-0 kubenswrapper[6980]: I0313 12:39:01.867470 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r94sm\" (UniqueName: \"kubernetes.io/projected/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-kube-api-access-r94sm\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm" Mar 13 12:39:01.867599 master-0 kubenswrapper[6980]: E0313 12:39:01.867592 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found Mar 13 12:39:01.868225 master-0 kubenswrapper[6980]: E0313 12:39:01.867643 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config podName:f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f nodeName:}" failed. No retries permitted until 2026-03-13 12:39:02.367629164 +0000 UTC m=+9.701623790 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config") pod "route-controller-manager-58959cd4d6-n7qbm" (UID: "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f") : configmap "config" not found
Mar 13 12:39:01.868225 master-0 kubenswrapper[6980]: I0313 12:39:01.867691 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:01.868225 master-0 kubenswrapper[6980]: E0313 12:39:01.867898 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:39:01.868225 master-0 kubenswrapper[6980]: E0313 12:39:01.867955 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:01.868225 master-0 kubenswrapper[6980]: E0313 12:39:01.868035 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca podName:f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f nodeName:}" failed. No retries permitted until 2026-03-13 12:39:02.367974985 +0000 UTC m=+9.701969611 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca") pod "route-controller-manager-58959cd4d6-n7qbm" (UID: "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f") : configmap "client-ca" not found
Mar 13 12:39:01.868225 master-0 kubenswrapper[6980]: E0313 12:39:01.868060 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert podName:f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f nodeName:}" failed. No retries permitted until 2026-03-13 12:39:02.368050677 +0000 UTC m=+9.702045293 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert") pod "route-controller-manager-58959cd4d6-n7qbm" (UID: "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f") : secret "serving-cert" not found
Mar 13 12:39:01.877631 master-0 kubenswrapper[6980]: I0313 12:39:01.877546 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:39:01.898767 master-0 kubenswrapper[6980]: I0313 12:39:01.898697 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94sm\" (UniqueName: \"kubernetes.io/projected/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-kube-api-access-r94sm\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:02.075100 master-0 kubenswrapper[6980]: I0313 12:39:02.074996 6980 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:39:02.318935 master-0 kubenswrapper[6980]: I0313 12:39:02.318837 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:02.319239 master-0 kubenswrapper[6980]: I0313 12:39:02.318966 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:02.319239 master-0 kubenswrapper[6980]: I0313 12:39:02.319053 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:02.319239 master-0 kubenswrapper[6980]: I0313 12:39:02.319136 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:02.319528 master-0 kubenswrapper[6980]: E0313 12:39:02.319260 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:39:02.319528 master-0 kubenswrapper[6980]: E0313 12:39:02.319326 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca podName:78c43844-98df-4837-a7e9-9fcf7b31b099 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:03.319306131 +0000 UTC m=+10.653300757 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca") pod "controller-manager-6f7fd6c796-4jg64" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099") : configmap "client-ca" not found
Mar 13 12:39:02.319703 master-0 kubenswrapper[6980]: E0313 12:39:02.319658 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:02.319805 master-0 kubenswrapper[6980]: E0313 12:39:02.319767 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert podName:78c43844-98df-4837-a7e9-9fcf7b31b099 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:03.319741095 +0000 UTC m=+10.653735791 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert") pod "controller-manager-6f7fd6c796-4jg64" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099") : secret "serving-cert" not found
Mar 13 12:39:02.319857 master-0 kubenswrapper[6980]: E0313 12:39:02.319822 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found
Mar 13 12:39:02.319898 master-0 kubenswrapper[6980]: E0313 12:39:02.319861 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config podName:78c43844-98df-4837-a7e9-9fcf7b31b099 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:03.319851548 +0000 UTC m=+10.653846244 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config") pod "controller-manager-6f7fd6c796-4jg64" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099") : configmap "config" not found
Mar 13 12:39:02.319898 master-0 kubenswrapper[6980]: E0313 12:39:02.319869 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found
Mar 13 12:39:02.319969 master-0 kubenswrapper[6980]: E0313 12:39:02.319913 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles podName:78c43844-98df-4837-a7e9-9fcf7b31b099 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:03.31990171 +0000 UTC m=+10.653896386 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-4jg64" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099") : configmap "openshift-global-ca" not found
Mar 13 12:39:02.419954 master-0 kubenswrapper[6980]: I0313 12:39:02.419855 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:02.420167 master-0 kubenswrapper[6980]: E0313 12:39:02.420017 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: configmap "config" not found
Mar 13 12:39:02.420167 master-0 kubenswrapper[6980]: E0313 12:39:02.420116 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config podName:f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f nodeName:}" failed. No retries permitted until 2026-03-13 12:39:03.420094279 +0000 UTC m=+10.754088905 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config") pod "route-controller-manager-58959cd4d6-n7qbm" (UID: "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f") : configmap "config" not found
Mar 13 12:39:02.420564 master-0 kubenswrapper[6980]: I0313 12:39:02.420533 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:02.420749 master-0 kubenswrapper[6980]: I0313 12:39:02.420712 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:02.420853 master-0 kubenswrapper[6980]: E0313 12:39:02.420834 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:02.420992 master-0 kubenswrapper[6980]: E0313 12:39:02.420879 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert podName:f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f nodeName:}" failed. No retries permitted until 2026-03-13 12:39:03.420867443 +0000 UTC m=+10.754862159 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert") pod "route-controller-manager-58959cd4d6-n7qbm" (UID: "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f") : secret "serving-cert" not found
Mar 13 12:39:02.420992 master-0 kubenswrapper[6980]: E0313 12:39:02.420934 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:39:02.420992 master-0 kubenswrapper[6980]: E0313 12:39:02.420965 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca podName:f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f nodeName:}" failed. No retries permitted until 2026-03-13 12:39:03.420955526 +0000 UTC m=+10.754950152 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca") pod "route-controller-manager-58959cd4d6-n7qbm" (UID: "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f") : configmap "client-ca" not found
Mar 13 12:39:02.689120 master-0 kubenswrapper[6980]: I0313 12:39:02.688920 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:39:02.695801 master-0 kubenswrapper[6980]: I0313 12:39:02.695739 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:39:02.960370 master-0 kubenswrapper[6980]: I0313 12:39:02.960060 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"]
Mar 13 12:39:02.960875 master-0 kubenswrapper[6980]: E0313 12:39:02.960626 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64" podUID="78c43844-98df-4837-a7e9-9fcf7b31b099"
Mar 13 12:39:03.109620 master-0 kubenswrapper[6980]: I0313 12:39:03.109540 6980 generic.go:334] "Generic (PLEG): container finished" podID="54c7efc1-6d89-4831-89d6-6f2812c36c36" containerID="2f4b310ff7db85ab3ef583a7bbcbdfb4805f7468c4fa6d6fc7e8d6fd0d181697" exitCode=0
Mar 13 12:39:03.110401 master-0 kubenswrapper[6980]: I0313 12:39:03.109644 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" event={"ID":"54c7efc1-6d89-4831-89d6-6f2812c36c36","Type":"ContainerDied","Data":"2f4b310ff7db85ab3ef583a7bbcbdfb4805f7468c4fa6d6fc7e8d6fd0d181697"}
Mar 13 12:39:03.112772 master-0 kubenswrapper[6980]: I0313 12:39:03.112734 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:03.113011 master-0 kubenswrapper[6980]: I0313 12:39:03.112963 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828" event={"ID":"d53c7e46-86e9-4328-9dfd-aec6deef5c01","Type":"ContainerStarted","Data":"d7a36fdd0b153d8fdb4540b3fcd458052672e0226aedc009e1ca191a106ed499"}
Mar 13 12:39:03.140425 master-0 kubenswrapper[6980]: I0313 12:39:03.140377 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:03.172605 master-0 kubenswrapper[6980]: I0313 12:39:03.171759 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"]
Mar 13 12:39:03.172605 master-0 kubenswrapper[6980]: E0313 12:39:03.172110 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm" podUID="f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f"
Mar 13 12:39:03.306587 master-0 kubenswrapper[6980]: I0313 12:39:03.306293 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkbtj\" (UniqueName: \"kubernetes.io/projected/78c43844-98df-4837-a7e9-9fcf7b31b099-kube-api-access-wkbtj\") pod \"78c43844-98df-4837-a7e9-9fcf7b31b099\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") "
Mar 13 12:39:03.311181 master-0 kubenswrapper[6980]: I0313 12:39:03.311114 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78c43844-98df-4837-a7e9-9fcf7b31b099-kube-api-access-wkbtj" (OuterVolumeSpecName: "kube-api-access-wkbtj") pod "78c43844-98df-4837-a7e9-9fcf7b31b099" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099"). InnerVolumeSpecName "kube-api-access-wkbtj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:39:03.407365 master-0 kubenswrapper[6980]: I0313 12:39:03.407277 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:03.407365 master-0 kubenswrapper[6980]: I0313 12:39:03.407348 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:03.407667 master-0 kubenswrapper[6980]: E0313 12:39:03.407451 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:39:03.407667 master-0 kubenswrapper[6980]: E0313 12:39:03.407520 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca podName:78c43844-98df-4837-a7e9-9fcf7b31b099 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:05.407501621 +0000 UTC m=+12.741496247 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca") pod "controller-manager-6f7fd6c796-4jg64" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099") : configmap "client-ca" not found
Mar 13 12:39:03.407825 master-0 kubenswrapper[6980]: I0313 12:39:03.407768 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:03.407918 master-0 kubenswrapper[6980]: I0313 12:39:03.407898 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:03.407961 master-0 kubenswrapper[6980]: I0313 12:39:03.407945 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkbtj\" (UniqueName: \"kubernetes.io/projected/78c43844-98df-4837-a7e9-9fcf7b31b099-kube-api-access-wkbtj\") on node \"master-0\" DevicePath \"\""
Mar 13 12:39:03.408091 master-0 kubenswrapper[6980]: E0313 12:39:03.408064 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:03.408145 master-0 kubenswrapper[6980]: E0313 12:39:03.408124 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert podName:78c43844-98df-4837-a7e9-9fcf7b31b099 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:05.40810765 +0000 UTC m=+12.742102276 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert") pod "controller-manager-6f7fd6c796-4jg64" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099") : secret "serving-cert" not found
Mar 13 12:39:03.408719 master-0 kubenswrapper[6980]: I0313 12:39:03.408678 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:03.408778 master-0 kubenswrapper[6980]: I0313 12:39:03.408756 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-4jg64\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:03.508618 master-0 kubenswrapper[6980]: I0313 12:39:03.508545 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config\") pod \"78c43844-98df-4837-a7e9-9fcf7b31b099\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") "
Mar 13 12:39:03.508861 master-0 kubenswrapper[6980]: I0313 12:39:03.508682 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles\") pod \"78c43844-98df-4837-a7e9-9fcf7b31b099\" (UID: \"78c43844-98df-4837-a7e9-9fcf7b31b099\") "
Mar 13 12:39:03.508861 master-0 kubenswrapper[6980]: I0313 12:39:03.508834 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:03.508991 master-0 kubenswrapper[6980]: I0313 12:39:03.508960 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:03.509046 master-0 kubenswrapper[6980]: I0313 12:39:03.508997 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:03.509193 master-0 kubenswrapper[6980]: E0313 12:39:03.509162 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:39:03.509254 master-0 kubenswrapper[6980]: E0313 12:39:03.509233 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca podName:f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f nodeName:}" failed. No retries permitted until 2026-03-13 12:39:05.509214206 +0000 UTC m=+12.843208832 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca") pod "route-controller-manager-58959cd4d6-n7qbm" (UID: "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f") : configmap "client-ca" not found
Mar 13 12:39:03.509645 master-0 kubenswrapper[6980]: E0313 12:39:03.509588 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:03.509723 master-0 kubenswrapper[6980]: E0313 12:39:03.509709 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert podName:f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f nodeName:}" failed. No retries permitted until 2026-03-13 12:39:05.5096706 +0000 UTC m=+12.843665246 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert") pod "route-controller-manager-58959cd4d6-n7qbm" (UID: "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f") : secret "serving-cert" not found
Mar 13 12:39:03.510007 master-0 kubenswrapper[6980]: I0313 12:39:03.509971 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "78c43844-98df-4837-a7e9-9fcf7b31b099" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:39:03.510518 master-0 kubenswrapper[6980]: I0313 12:39:03.510469 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config\") pod \"route-controller-manager-58959cd4d6-n7qbm\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") " pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:03.510730 master-0 kubenswrapper[6980]: I0313 12:39:03.510695 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config" (OuterVolumeSpecName: "config") pod "78c43844-98df-4837-a7e9-9fcf7b31b099" (UID: "78c43844-98df-4837-a7e9-9fcf7b31b099"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:39:03.610327 master-0 kubenswrapper[6980]: I0313 12:39:03.610179 6980 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 13 12:39:03.610327 master-0 kubenswrapper[6980]: I0313 12:39:03.610226 6980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:39:04.118455 master-0 kubenswrapper[6980]: I0313 12:39:04.118366 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828" event={"ID":"d53c7e46-86e9-4328-9dfd-aec6deef5c01","Type":"ContainerStarted","Data":"9ea0029ac4b50e0c60fbb9d63522a12c1e74bc4147c49445748325388ca3d521"}
Mar 13 12:39:04.119415 master-0 kubenswrapper[6980]: I0313 12:39:04.118435 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:04.119526 master-0 kubenswrapper[6980]: I0313 12:39:04.118481 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"
Mar 13 12:39:04.119606 master-0 kubenswrapper[6980]: I0313 12:39:04.119387 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828" event={"ID":"d53c7e46-86e9-4328-9dfd-aec6deef5c01","Type":"ContainerStarted","Data":"1b68451adedb87a24ab76fe4d3d6e91af90d6daa92593c72044b5a02d932c30f"}
Mar 13 12:39:04.129171 master-0 kubenswrapper[6980]: I0313 12:39:04.129130 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"
Mar 13 12:39:04.136256 master-0 kubenswrapper[6980]: I0313 12:39:04.136161 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828" podStartSLOduration=2.608155261 podStartE2EDuration="4.136123073s" podCreationTimestamp="2026-03-13 12:39:00 +0000 UTC" firstStartedPulling="2026-03-13 12:39:02.099632672 +0000 UTC m=+9.433627298" lastFinishedPulling="2026-03-13 12:39:03.627600484 +0000 UTC m=+10.961595110" observedRunningTime="2026-03-13 12:39:04.135380311 +0000 UTC m=+11.469374967" watchObservedRunningTime="2026-03-13 12:39:04.136123073 +0000 UTC m=+11.470117699"
Mar 13 12:39:04.170348 master-0 kubenswrapper[6980]: I0313 12:39:04.170275 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-78f7459566-vfms4"]
Mar 13 12:39:04.170952 master-0 kubenswrapper[6980]: I0313 12:39:04.170912 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.175281 master-0 kubenswrapper[6980]: I0313 12:39:04.175212 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"]
Mar 13 12:39:04.180991 master-0 kubenswrapper[6980]: I0313 12:39:04.180325 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-78f7459566-vfms4"]
Mar 13 12:39:04.182082 master-0 kubenswrapper[6980]: I0313 12:39:04.182046 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-4jg64"]
Mar 13 12:39:04.182241 master-0 kubenswrapper[6980]: I0313 12:39:04.182083 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 12:39:04.182241 master-0 kubenswrapper[6980]: I0313 12:39:04.182211 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 12:39:04.182376 master-0 kubenswrapper[6980]: I0313 12:39:04.182353 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 12:39:04.182657 master-0 kubenswrapper[6980]: I0313 12:39:04.182516 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 12:39:04.182934 master-0 kubenswrapper[6980]: I0313 12:39:04.182898 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 12:39:04.186950 master-0 kubenswrapper[6980]: I0313 12:39:04.186678 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 13 12:39:04.225032 master-0 kubenswrapper[6980]: I0313 12:39:04.224984 6980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78c43844-98df-4837-a7e9-9fcf7b31b099-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:39:04.225032 master-0 kubenswrapper[6980]: I0313 12:39:04.225016 6980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78c43844-98df-4837-a7e9-9fcf7b31b099-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:39:04.327090 master-0 kubenswrapper[6980]: I0313 12:39:04.327013 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r94sm\" (UniqueName: \"kubernetes.io/projected/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-kube-api-access-r94sm\") pod \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") "
Mar 13 12:39:04.327349 master-0 kubenswrapper[6980]: I0313 12:39:04.327155 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config\") pod \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\" (UID: \"f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f\") "
Mar 13 12:39:04.327349 master-0 kubenswrapper[6980]: I0313 12:39:04.327314 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xvn76\" (UniqueName: \"kubernetes.io/projected/700f65cc-83cf-4463-b548-1b73749278db-kube-api-access-xvn76\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.327435 master-0 kubenswrapper[6980]: I0313 12:39:04.327406 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-config\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.327470 master-0 kubenswrapper[6980]: I0313 12:39:04.327459 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-proxy-ca-bundles\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.327547 master-0 kubenswrapper[6980]: I0313 12:39:04.327485 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.327637 master-0 kubenswrapper[6980]: I0313 12:39:04.327570 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.328520 master-0 kubenswrapper[6980]: I0313 12:39:04.328465 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config" (OuterVolumeSpecName: "config") pod "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f" (UID: "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:39:04.333198 master-0 kubenswrapper[6980]: I0313 12:39:04.331890 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-kube-api-access-r94sm" (OuterVolumeSpecName: "kube-api-access-r94sm") pod "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f" (UID: "f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f"). InnerVolumeSpecName "kube-api-access-r94sm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:39:04.428305 master-0 kubenswrapper[6980]: I0313 12:39:04.428200 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xvn76\" (UniqueName: \"kubernetes.io/projected/700f65cc-83cf-4463-b548-1b73749278db-kube-api-access-xvn76\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.428305 master-0 kubenswrapper[6980]: I0313 12:39:04.428297 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-config\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.428724 master-0 kubenswrapper[6980]: I0313 12:39:04.428363 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-proxy-ca-bundles\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.428900 master-0 kubenswrapper[6980]: I0313 12:39:04.428826 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.429000 master-0 kubenswrapper[6980]: E0313 12:39:04.428938 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:39:04.429062 master-0 kubenswrapper[6980]: E0313 12:39:04.429001 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca podName:700f65cc-83cf-4463-b548-1b73749278db nodeName:}" failed. No retries permitted until 2026-03-13 12:39:04.928982926 +0000 UTC m=+12.262977552 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca") pod "controller-manager-78f7459566-vfms4" (UID: "700f65cc-83cf-4463-b548-1b73749278db") : configmap "client-ca" not found
Mar 13 12:39:04.429671 master-0 kubenswrapper[6980]: I0313 12:39:04.429205 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.429671 master-0 kubenswrapper[6980]: I0313 12:39:04.429447 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r94sm\" (UniqueName: \"kubernetes.io/projected/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-kube-api-access-r94sm\") on node \"master-0\" DevicePath \"\""
Mar 13 12:39:04.429671 master-0 kubenswrapper[6980]: I0313 12:39:04.429476 6980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:39:04.429671 master-0 kubenswrapper[6980]: E0313 12:39:04.429557 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:04.429671 master-0 kubenswrapper[6980]: E0313 12:39:04.429636 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert podName:700f65cc-83cf-4463-b548-1b73749278db nodeName:}" failed. No retries permitted until 2026-03-13 12:39:04.929617176 +0000 UTC m=+12.263611802 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert") pod "controller-manager-78f7459566-vfms4" (UID: "700f65cc-83cf-4463-b548-1b73749278db") : secret "serving-cert" not found
Mar 13 12:39:04.430071 master-0 kubenswrapper[6980]: I0313 12:39:04.430012 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-proxy-ca-bundles\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.430297 master-0 kubenswrapper[6980]: I0313 12:39:04.430255 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-config\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4"
Mar 13 12:39:04.449685 master-0 kubenswrapper[6980]: I0313 12:39:04.449622 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xvn76\" (UniqueName:
\"kubernetes.io/projected/700f65cc-83cf-4463-b548-1b73749278db-kube-api-access-xvn76\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4" Mar 13 12:39:04.725274 master-0 kubenswrapper[6980]: I0313 12:39:04.724795 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78f7459566-vfms4"] Mar 13 12:39:04.725658 master-0 kubenswrapper[6980]: E0313 12:39:04.725058 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-78f7459566-vfms4" podUID="700f65cc-83cf-4463-b548-1b73749278db" Mar 13 12:39:04.863423 master-0 kubenswrapper[6980]: I0313 12:39:04.863351 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78c43844-98df-4837-a7e9-9fcf7b31b099" path="/var/lib/kubelet/pods/78c43844-98df-4837-a7e9-9fcf7b31b099/volumes" Mar 13 12:39:04.953665 master-0 kubenswrapper[6980]: I0313 12:39:04.953169 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4" Mar 13 12:39:04.953665 master-0 kubenswrapper[6980]: I0313 12:39:04.953514 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4" Mar 13 12:39:04.953665 master-0 kubenswrapper[6980]: E0313 
12:39:04.953309 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:04.953912 master-0 kubenswrapper[6980]: E0313 12:39:04.953701 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca podName:700f65cc-83cf-4463-b548-1b73749278db nodeName:}" failed. No retries permitted until 2026-03-13 12:39:05.95365805 +0000 UTC m=+13.287652716 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca") pod "controller-manager-78f7459566-vfms4" (UID: "700f65cc-83cf-4463-b548-1b73749278db") : configmap "client-ca" not found Mar 13 12:39:04.953912 master-0 kubenswrapper[6980]: E0313 12:39:04.953756 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:04.953912 master-0 kubenswrapper[6980]: E0313 12:39:04.953824 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert podName:700f65cc-83cf-4463-b548-1b73749278db nodeName:}" failed. No retries permitted until 2026-03-13 12:39:05.953802664 +0000 UTC m=+13.287797340 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert") pod "controller-manager-78f7459566-vfms4" (UID: "700f65cc-83cf-4463-b548-1b73749278db") : secret "serving-cert" not found Mar 13 12:39:05.125829 master-0 kubenswrapper[6980]: I0313 12:39:05.125219 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78f7459566-vfms4" Mar 13 12:39:05.125829 master-0 kubenswrapper[6980]: I0313 12:39:05.125267 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm" Mar 13 12:39:05.135809 master-0 kubenswrapper[6980]: I0313 12:39:05.135629 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78f7459566-vfms4" Mar 13 12:39:05.156269 master-0 kubenswrapper[6980]: I0313 12:39:05.156158 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-config\") pod \"700f65cc-83cf-4463-b548-1b73749278db\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " Mar 13 12:39:05.156269 master-0 kubenswrapper[6980]: I0313 12:39:05.156199 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xvn76\" (UniqueName: \"kubernetes.io/projected/700f65cc-83cf-4463-b548-1b73749278db-kube-api-access-xvn76\") pod \"700f65cc-83cf-4463-b548-1b73749278db\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " Mar 13 12:39:05.156269 master-0 kubenswrapper[6980]: I0313 12:39:05.156242 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-proxy-ca-bundles\") pod \"700f65cc-83cf-4463-b548-1b73749278db\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " Mar 13 12:39:05.157710 master-0 kubenswrapper[6980]: I0313 12:39:05.157661 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-config" (OuterVolumeSpecName: "config") pod "700f65cc-83cf-4463-b548-1b73749278db" (UID: "700f65cc-83cf-4463-b548-1b73749278db"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:39:05.157875 master-0 kubenswrapper[6980]: I0313 12:39:05.157844 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "700f65cc-83cf-4463-b548-1b73749278db" (UID: "700f65cc-83cf-4463-b548-1b73749278db"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:39:05.162498 master-0 kubenswrapper[6980]: I0313 12:39:05.162440 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"] Mar 13 12:39:05.163212 master-0 kubenswrapper[6980]: I0313 12:39:05.163156 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-58959cd4d6-n7qbm"] Mar 13 12:39:05.165517 master-0 kubenswrapper[6980]: I0313 12:39:05.165454 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/700f65cc-83cf-4463-b548-1b73749278db-kube-api-access-xvn76" (OuterVolumeSpecName: "kube-api-access-xvn76") pod "700f65cc-83cf-4463-b548-1b73749278db" (UID: "700f65cc-83cf-4463-b548-1b73749278db"). InnerVolumeSpecName "kube-api-access-xvn76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:39:05.258651 master-0 kubenswrapper[6980]: I0313 12:39:05.258103 6980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:05.258651 master-0 kubenswrapper[6980]: I0313 12:39:05.258153 6980 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:05.258651 master-0 kubenswrapper[6980]: I0313 12:39:05.258169 6980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:05.258651 master-0 kubenswrapper[6980]: I0313 12:39:05.258183 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xvn76\" (UniqueName: \"kubernetes.io/projected/700f65cc-83cf-4463-b548-1b73749278db-kube-api-access-xvn76\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:05.258651 master-0 kubenswrapper[6980]: I0313 12:39:05.258197 6980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:05.964515 master-0 kubenswrapper[6980]: I0313 12:39:05.964411 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4" Mar 13 12:39:05.964515 master-0 kubenswrapper[6980]: I0313 12:39:05.964487 6980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert\") pod \"controller-manager-78f7459566-vfms4\" (UID: \"700f65cc-83cf-4463-b548-1b73749278db\") " pod="openshift-controller-manager/controller-manager-78f7459566-vfms4" Mar 13 12:39:05.964874 master-0 kubenswrapper[6980]: E0313 12:39:05.964653 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:05.964874 master-0 kubenswrapper[6980]: E0313 12:39:05.964750 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca podName:700f65cc-83cf-4463-b548-1b73749278db nodeName:}" failed. No retries permitted until 2026-03-13 12:39:07.964725446 +0000 UTC m=+15.298720122 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca") pod "controller-manager-78f7459566-vfms4" (UID: "700f65cc-83cf-4463-b548-1b73749278db") : configmap "client-ca" not found Mar 13 12:39:05.965097 master-0 kubenswrapper[6980]: E0313 12:39:05.965030 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:05.965160 master-0 kubenswrapper[6980]: E0313 12:39:05.965137 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert podName:700f65cc-83cf-4463-b548-1b73749278db nodeName:}" failed. No retries permitted until 2026-03-13 12:39:07.965119968 +0000 UTC m=+15.299114594 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert") pod "controller-manager-78f7459566-vfms4" (UID: "700f65cc-83cf-4463-b548-1b73749278db") : secret "serving-cert" not found Mar 13 12:39:06.132742 master-0 kubenswrapper[6980]: I0313 12:39:06.132648 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-78f7459566-vfms4" Mar 13 12:39:06.133788 master-0 kubenswrapper[6980]: I0313 12:39:06.132623 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" event={"ID":"54c7efc1-6d89-4831-89d6-6f2812c36c36","Type":"ContainerStarted","Data":"a96046bbc6e2f7a9efce1073fbf280ed5ef6a4fec79a22f6b7f77fdfe7b84349"} Mar 13 12:39:06.191990 master-0 kubenswrapper[6980]: I0313 12:39:06.191561 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-78f7459566-vfms4"] Mar 13 12:39:06.196948 master-0 kubenswrapper[6980]: I0313 12:39:06.195977 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-78f7459566-vfms4"] Mar 13 12:39:06.268468 master-0 kubenswrapper[6980]: I0313 12:39:06.268310 6980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/700f65cc-83cf-4463-b548-1b73749278db-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:06.268468 master-0 kubenswrapper[6980]: I0313 12:39:06.268345 6980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/700f65cc-83cf-4463-b548-1b73749278db-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:06.370416 master-0 kubenswrapper[6980]: I0313 12:39:06.370360 6980 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7"] Mar 13 12:39:06.377505 master-0 kubenswrapper[6980]: I0313 12:39:06.375948 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.379163 master-0 kubenswrapper[6980]: I0313 12:39:06.378700 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 12:39:06.379163 master-0 kubenswrapper[6980]: I0313 12:39:06.378980 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 12:39:06.379247 master-0 kubenswrapper[6980]: I0313 12:39:06.379215 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 12:39:06.380946 master-0 kubenswrapper[6980]: I0313 12:39:06.379329 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 12:39:06.380946 master-0 kubenswrapper[6980]: I0313 12:39:06.379389 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 12:39:06.384427 master-0 kubenswrapper[6980]: I0313 12:39:06.384382 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-669d874ccc-8rrvh"] Mar 13 12:39:06.385009 master-0 kubenswrapper[6980]: I0313 12:39:06.384984 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.388923 master-0 kubenswrapper[6980]: I0313 12:39:06.388885 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:39:06.391069 master-0 kubenswrapper[6980]: I0313 12:39:06.389031 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 12:39:06.391069 master-0 kubenswrapper[6980]: I0313 12:39:06.389200 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 12:39:06.391069 master-0 kubenswrapper[6980]: I0313 12:39:06.389672 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 12:39:06.391310 master-0 kubenswrapper[6980]: I0313 12:39:06.391268 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 12:39:06.394194 master-0 kubenswrapper[6980]: I0313 12:39:06.394025 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-669d874ccc-8rrvh"] Mar 13 12:39:06.398389 master-0 kubenswrapper[6980]: I0313 12:39:06.396937 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7"] Mar 13 12:39:06.398389 master-0 kubenswrapper[6980]: I0313 12:39:06.397994 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:39:06.470621 master-0 kubenswrapper[6980]: I0313 12:39:06.469903 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqmnl\" (UniqueName: \"kubernetes.io/projected/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-kube-api-access-pqmnl\") pod 
\"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.470621 master-0 kubenswrapper[6980]: I0313 12:39:06.469986 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.470621 master-0 kubenswrapper[6980]: I0313 12:39:06.470047 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.470621 master-0 kubenswrapper[6980]: I0313 12:39:06.470079 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-config\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.571469 master-0 kubenswrapper[6980]: I0313 12:39:06.571105 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " 
pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.571469 master-0 kubenswrapper[6980]: I0313 12:39:06.571232 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-proxy-ca-bundles\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.719476 master-0 kubenswrapper[6980]: I0313 12:39:06.718867 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-config\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.719476 master-0 kubenswrapper[6980]: I0313 12:39:06.719414 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.719476 master-0 kubenswrapper[6980]: I0313 12:39:06.719468 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-config\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.719704 master-0 kubenswrapper[6980]: I0313 12:39:06.719613 6980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.719704 master-0 kubenswrapper[6980]: I0313 12:39:06.719648 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.719704 master-0 kubenswrapper[6980]: I0313 12:39:06.719678 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpb4l\" (UniqueName: \"kubernetes.io/projected/425e18c5-3d11-4f04-be33-45fa3f035129-kube-api-access-fpb4l\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.719877 master-0 kubenswrapper[6980]: I0313 12:39:06.719719 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqmnl\" (UniqueName: \"kubernetes.io/projected/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-kube-api-access-pqmnl\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.721785 master-0 kubenswrapper[6980]: E0313 12:39:06.720036 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:06.721785 master-0 kubenswrapper[6980]: E0313 12:39:06.720123 6980 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:07.220098099 +0000 UTC m=+14.554092815 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : configmap "client-ca" not found Mar 13 12:39:06.721785 master-0 kubenswrapper[6980]: I0313 12:39:06.721359 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-config\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.726862 master-0 kubenswrapper[6980]: E0313 12:39:06.726656 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:06.726862 master-0 kubenswrapper[6980]: E0313 12:39:06.726769 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:07.226744832 +0000 UTC m=+14.560739468 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : secret "serving-cert" not found Mar 13 12:39:06.754664 master-0 kubenswrapper[6980]: I0313 12:39:06.753569 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqmnl\" (UniqueName: \"kubernetes.io/projected/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-kube-api-access-pqmnl\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:06.820202 master-0 kubenswrapper[6980]: I0313 12:39:06.820116 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.820202 master-0 kubenswrapper[6980]: I0313 12:39:06.820165 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.820202 master-0 kubenswrapper[6980]: I0313 12:39:06.820186 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fpb4l\" (UniqueName: \"kubernetes.io/projected/425e18c5-3d11-4f04-be33-45fa3f035129-kube-api-access-fpb4l\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " 
pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.820539 master-0 kubenswrapper[6980]: I0313 12:39:06.820246 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-proxy-ca-bundles\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.820539 master-0 kubenswrapper[6980]: E0313 12:39:06.820367 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:06.820539 master-0 kubenswrapper[6980]: I0313 12:39:06.820414 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-config\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.820539 master-0 kubenswrapper[6980]: E0313 12:39:06.820471 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:07.320439203 +0000 UTC m=+14.654433829 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : configmap "client-ca" not found Mar 13 12:39:06.821629 master-0 kubenswrapper[6980]: I0313 12:39:06.821504 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-config\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.821629 master-0 kubenswrapper[6980]: E0313 12:39:06.820388 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:06.822069 master-0 kubenswrapper[6980]: I0313 12:39:06.822015 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-proxy-ca-bundles\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.822233 master-0 kubenswrapper[6980]: E0313 12:39:06.822199 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:07.322185837 +0000 UTC m=+14.656180543 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : secret "serving-cert" not found Mar 13 12:39:06.842411 master-0 kubenswrapper[6980]: I0313 12:39:06.842345 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpb4l\" (UniqueName: \"kubernetes.io/projected/425e18c5-3d11-4f04-be33-45fa3f035129-kube-api-access-fpb4l\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:06.864462 master-0 kubenswrapper[6980]: I0313 12:39:06.864428 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="700f65cc-83cf-4463-b548-1b73749278db" path="/var/lib/kubelet/pods/700f65cc-83cf-4463-b548-1b73749278db/volumes" Mar 13 12:39:06.864900 master-0 kubenswrapper[6980]: I0313 12:39:06.864818 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f" path="/var/lib/kubelet/pods/f6ef99e1-d61e-46a4-8fcc-3ec9dc6a533f/volumes" Mar 13 12:39:07.226056 master-0 kubenswrapper[6980]: I0313 12:39:07.226012 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:07.226802 master-0 kubenswrapper[6980]: E0313 12:39:07.226251 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:07.226802 master-0 kubenswrapper[6980]: E0313 12:39:07.226504 6980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:08.226481613 +0000 UTC m=+15.560476239 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : configmap "client-ca" not found Mar 13 12:39:07.327680 master-0 kubenswrapper[6980]: I0313 12:39:07.327626 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:07.327916 master-0 kubenswrapper[6980]: E0313 12:39:07.327869 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:07.328000 master-0 kubenswrapper[6980]: E0313 12:39:07.327979 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:08.327954191 +0000 UTC m=+15.661948907 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : secret "serving-cert" not found Mar 13 12:39:07.328736 master-0 kubenswrapper[6980]: I0313 12:39:07.328701 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:07.328812 master-0 kubenswrapper[6980]: I0313 12:39:07.328744 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:07.329000 master-0 kubenswrapper[6980]: E0313 12:39:07.328928 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:07.329000 master-0 kubenswrapper[6980]: E0313 12:39:07.328979 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:08.328966902 +0000 UTC m=+15.662961588 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : secret "serving-cert" not found Mar 13 12:39:07.329295 master-0 kubenswrapper[6980]: E0313 12:39:07.329024 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:07.329295 master-0 kubenswrapper[6980]: E0313 12:39:07.329049 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:08.329041395 +0000 UTC m=+15.663036021 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : configmap "client-ca" not found Mar 13 12:39:08.238892 master-0 kubenswrapper[6980]: I0313 12:39:08.237188 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerStarted","Data":"24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81"} Mar 13 12:39:08.238892 master-0 kubenswrapper[6980]: I0313 12:39:08.237458 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:39:08.317748 master-0 kubenswrapper[6980]: I0313 12:39:08.317688 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: 
\"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:08.318011 master-0 kubenswrapper[6980]: E0313 12:39:08.317972 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:08.318063 master-0 kubenswrapper[6980]: E0313 12:39:08.318038 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:10.318018774 +0000 UTC m=+17.652013400 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : configmap "client-ca" not found Mar 13 12:39:08.419183 master-0 kubenswrapper[6980]: I0313 12:39:08.419070 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:08.419183 master-0 kubenswrapper[6980]: I0313 12:39:08.419162 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:08.420811 master-0 kubenswrapper[6980]: I0313 12:39:08.419255 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:08.420811 master-0 kubenswrapper[6980]: E0313 12:39:08.419498 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:08.420811 master-0 kubenswrapper[6980]: E0313 12:39:08.419569 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:10.419550124 +0000 UTC m=+17.753544740 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : secret "serving-cert" not found Mar 13 12:39:08.420811 master-0 kubenswrapper[6980]: E0313 12:39:08.419826 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:08.420811 master-0 kubenswrapper[6980]: E0313 12:39:08.419880 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:10.419866884 +0000 UTC m=+17.753861510 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : configmap "client-ca" not found Mar 13 12:39:08.420811 master-0 kubenswrapper[6980]: E0313 12:39:08.419954 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:08.420811 master-0 kubenswrapper[6980]: E0313 12:39:08.419995 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:10.419984958 +0000 UTC m=+17.753979574 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : secret "serving-cert" not found Mar 13 12:39:09.631836 master-0 kubenswrapper[6980]: I0313 12:39:09.631470 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: I0313 12:39:09.631843 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: I0313 12:39:09.631874 6980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: I0313 12:39:09.631908 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: I0313 12:39:09.631943 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: E0313 12:39:09.631846 6980 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: E0313 12:39:09.632018 6980 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: E0313 12:39:09.632040 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:39:25.63202067 +0000 UTC m=+32.966015296 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: E0313 12:39:09.632061 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.632052021 +0000 UTC m=+32.966046657 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: E0313 12:39:09.632093 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: I0313 12:39:09.632113 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: E0313 12:39:09.632126 6980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.632115813 +0000 UTC m=+32.966110439 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: I0313 12:39:09.632156 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: I0313 12:39:09.632203 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: I0313 12:39:09.632237 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:39:09.632334 
master-0 kubenswrapper[6980]: I0313 12:39:09.632282 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: I0313 12:39:09.632314 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:39:09.632334 master-0 kubenswrapper[6980]: E0313 12:39:09.632159 6980 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: I0313 12:39:09.632348 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632363 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.632353341 +0000 UTC m=+32.966347967 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: I0313 12:39:09.632383 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632411 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632445 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.632435853 +0000 UTC m=+32.966430479 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.631949 6980 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632492 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.632481995 +0000 UTC m=+32.966476621 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : secret "metrics-daemon-secret" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632494 6980 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632520 6980 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632219 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632328 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret 
"performance-addon-operator-webhook-cert" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632279 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632611 6980 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632524 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.632514916 +0000 UTC m=+32.966509542 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632656 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.63264587 +0000 UTC m=+32.966640496 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632671 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.63266483 +0000 UTC m=+32.966659456 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632683 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.632676801 +0000 UTC m=+32.966671427 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632696 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. 
No retries permitted until 2026-03-13 12:39:25.632690361 +0000 UTC m=+32.966684987 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632708 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.632702152 +0000 UTC m=+32.966696778 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632735 6980 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:39:09.632957 master-0 kubenswrapper[6980]: E0313 12:39:09.632779 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.632768304 +0000 UTC m=+32.966762930 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found Mar 13 12:39:10.479999 master-0 kubenswrapper[6980]: I0313 12:39:10.479934 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:10.480256 master-0 kubenswrapper[6980]: I0313 12:39:10.480081 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:10.480256 master-0 kubenswrapper[6980]: I0313 12:39:10.480141 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:10.480256 master-0 kubenswrapper[6980]: I0313 12:39:10.480211 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" 
Mar 13 12:39:10.480454 master-0 kubenswrapper[6980]: E0313 12:39:10.480428 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:10.480569 master-0 kubenswrapper[6980]: E0313 12:39:10.480548 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:14.480526076 +0000 UTC m=+21.814520702 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : secret "serving-cert" not found
Mar 13 12:39:10.480676 master-0 kubenswrapper[6980]: E0313 12:39:10.480650 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:39:10.480719 master-0 kubenswrapper[6980]: E0313 12:39:10.480685 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:14.480675231 +0000 UTC m=+21.814669857 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : configmap "client-ca" not found
Mar 13 12:39:10.480764 master-0 kubenswrapper[6980]: E0313 12:39:10.480721 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:39:10.480764 master-0 kubenswrapper[6980]: E0313 12:39:10.480745 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:14.480737203 +0000 UTC m=+21.814731829 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : configmap "client-ca" not found
Mar 13 12:39:10.480821 master-0 kubenswrapper[6980]: E0313 12:39:10.480812 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:10.480852 master-0 kubenswrapper[6980]: E0313 12:39:10.480835 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:14.480828626 +0000 UTC m=+21.814823252 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : secret "serving-cert" not found
Mar 13 12:39:10.490291 master-0 kubenswrapper[6980]: I0313 12:39:10.490227 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" event={"ID":"603fef71-e0cd-4617-bd8a-a55580578c2f","Type":"ContainerStarted","Data":"a593c0e3cdcdc60e311759e5407d46a2222b3d9d443d63f109618c4b09858401"}
Mar 13 12:39:10.506527 master-0 kubenswrapper[6980]: I0313 12:39:10.506473 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" event={"ID":"0d868028-9984-472a-8403-ffed767e1bf8","Type":"ContainerStarted","Data":"8d3d7c80d1f091cb6801c4897cba8089f08217db69ec67d4a437f0167c034ba9"}
Mar 13 12:39:11.948683 master-0 kubenswrapper[6980]: I0313 12:39:11.940957 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:39:12.237685 master-0 kubenswrapper[6980]: I0313 12:39:12.237196 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-5b44769c65-4nsbw"]
Mar 13 12:39:12.239535 master-0 kubenswrapper[6980]: I0313 12:39:12.239285 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.243187 master-0 kubenswrapper[6980]: I0313 12:39:12.242902 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 12:39:12.243371 master-0 kubenswrapper[6980]: I0313 12:39:12.243346 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 12:39:12.243455 master-0 kubenswrapper[6980]: I0313 12:39:12.243403 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 13 12:39:12.243612 master-0 kubenswrapper[6980]: I0313 12:39:12.243531 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 12:39:12.245003 master-0 kubenswrapper[6980]: I0313 12:39:12.244893 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 13 12:39:12.245114 master-0 kubenswrapper[6980]: I0313 12:39:12.245072 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0"
Mar 13 12:39:12.245665 master-0 kubenswrapper[6980]: I0313 12:39:12.245548 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 12:39:12.245756 master-0 kubenswrapper[6980]: I0313 12:39:12.245715 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0"
Mar 13 12:39:12.245959 master-0 kubenswrapper[6980]: I0313 12:39:12.245938 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 13 12:39:12.258697 master-0 kubenswrapper[6980]: I0313 12:39:12.258592 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 12:39:12.260427 master-0 kubenswrapper[6980]: I0313 12:39:12.259392 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-5b44769c65-4nsbw"]
Mar 13 12:39:12.329597 master-0 kubenswrapper[6980]: I0313 12:39:12.329509 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.329691 master-0 kubenswrapper[6980]: I0313 12:39:12.329660 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-trusted-ca-bundle\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.329751 master-0 kubenswrapper[6980]: I0313 12:39:12.329732 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-audit-dir\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.329825 master-0 kubenswrapper[6980]: I0313 12:39:12.329773 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2rz6\" (UniqueName: \"kubernetes.io/projected/56052b79-bd1c-4d51-9dfa-d9541499e147-kube-api-access-h2rz6\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.329874 master-0 kubenswrapper[6980]: I0313 12:39:12.329824 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-client\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.329921 master-0 kubenswrapper[6980]: I0313 12:39:12.329898 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.329976 master-0 kubenswrapper[6980]: I0313 12:39:12.329931 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-encryption-config\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.331838 master-0 kubenswrapper[6980]: I0313 12:39:12.330010 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-serving-ca\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.331838 master-0 kubenswrapper[6980]: I0313 12:39:12.330091 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-node-pullsecrets\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.331838 master-0 kubenswrapper[6980]: I0313 12:39:12.330248 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-image-import-ca\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.331838 master-0 kubenswrapper[6980]: I0313 12:39:12.330342 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-config\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.431804 master-0 kubenswrapper[6980]: I0313 12:39:12.431736 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-image-import-ca\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.431955 master-0 kubenswrapper[6980]: I0313 12:39:12.431872 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-config\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.431955 master-0 kubenswrapper[6980]: I0313 12:39:12.431941 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432061 master-0 kubenswrapper[6980]: I0313 12:39:12.431978 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-trusted-ca-bundle\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432061 master-0 kubenswrapper[6980]: I0313 12:39:12.432000 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-audit-dir\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432061 master-0 kubenswrapper[6980]: I0313 12:39:12.432034 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2rz6\" (UniqueName: \"kubernetes.io/projected/56052b79-bd1c-4d51-9dfa-d9541499e147-kube-api-access-h2rz6\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432061 master-0 kubenswrapper[6980]: I0313 12:39:12.432053 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-client\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432229 master-0 kubenswrapper[6980]: I0313 12:39:12.432091 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432229 master-0 kubenswrapper[6980]: I0313 12:39:12.432115 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-encryption-config\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432229 master-0 kubenswrapper[6980]: I0313 12:39:12.432151 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-serving-ca\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432229 master-0 kubenswrapper[6980]: I0313 12:39:12.432174 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-node-pullsecrets\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432389 master-0 kubenswrapper[6980]: I0313 12:39:12.432354 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-node-pullsecrets\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432637 master-0 kubenswrapper[6980]: I0313 12:39:12.432601 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-image-import-ca\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.432972 master-0 kubenswrapper[6980]: E0313 12:39:12.432950 6980 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 13 12:39:12.433036 master-0 kubenswrapper[6980]: E0313 12:39:12.432997 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:12.932983024 +0000 UTC m=+20.266977640 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : configmap "audit-0" not found
Mar 13 12:39:12.433179 master-0 kubenswrapper[6980]: I0313 12:39:12.433130 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-config\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.433773 master-0 kubenswrapper[6980]: E0313 12:39:12.433715 6980 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 12:39:12.433851 master-0 kubenswrapper[6980]: E0313 12:39:12.433838 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:12.93381321 +0000 UTC m=+20.267807886 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : secret "serving-cert" not found
Mar 13 12:39:12.434122 master-0 kubenswrapper[6980]: I0313 12:39:12.434097 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-trusted-ca-bundle\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.434194 master-0 kubenswrapper[6980]: I0313 12:39:12.434176 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-audit-dir\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.434682 master-0 kubenswrapper[6980]: I0313 12:39:12.434650 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-serving-ca\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.440386 master-0 kubenswrapper[6980]: I0313 12:39:12.440317 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-encryption-config\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.447387 master-0 kubenswrapper[6980]: I0313 12:39:12.447312 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-client\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.453525 master-0 kubenswrapper[6980]: I0313 12:39:12.453482 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2rz6\" (UniqueName: \"kubernetes.io/projected/56052b79-bd1c-4d51-9dfa-d9541499e147-kube-api-access-h2rz6\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.938070 master-0 kubenswrapper[6980]: I0313 12:39:12.938016 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.938292 master-0 kubenswrapper[6980]: I0313 12:39:12.938106 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:12.938292 master-0 kubenswrapper[6980]: E0313 12:39:12.938266 6980 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 12:39:12.938379 master-0 kubenswrapper[6980]: E0313 12:39:12.938336 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:13.938314455 +0000 UTC m=+21.272309081 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : secret "serving-cert" not found
Mar 13 12:39:12.938831 master-0 kubenswrapper[6980]: E0313 12:39:12.938801 6980 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 13 12:39:12.938900 master-0 kubenswrapper[6980]: E0313 12:39:12.938840 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:13.938832621 +0000 UTC m=+21.272827247 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : configmap "audit-0" not found
Mar 13 12:39:12.941590 master-0 kubenswrapper[6980]: I0313 12:39:12.941523 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" event={"ID":"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa","Type":"ContainerStarted","Data":"4b9e882a01cdfbc8bf7760e0d86d536a94312b94c74000951cc0b9a06f2c288b"}
Mar 13 12:39:13.777744 master-0 kubenswrapper[6980]: I0313 12:39:13.777301 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh"]
Mar 13 12:39:13.778931 master-0 kubenswrapper[6980]: I0313 12:39:13.778296 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh"
Mar 13 12:39:13.807406 master-0 kubenswrapper[6980]: I0313 12:39:13.807348 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh"]
Mar 13 12:39:13.848855 master-0 kubenswrapper[6980]: I0313 12:39:13.848793 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wqpz\" (UniqueName: \"kubernetes.io/projected/1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53-kube-api-access-9wqpz\") pod \"csi-snapshot-controller-7577d6f48-lf2dh\" (UID: \"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh"
Mar 13 12:39:13.948131 master-0 kubenswrapper[6980]: I0313 12:39:13.948070 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerStarted","Data":"cf479c6d6c1b4d3fb1e4d8c534df6ecd64180a47813aaab693ac30875cb0165f"}
Mar 13 12:39:13.949784 master-0 kubenswrapper[6980]: I0313 12:39:13.949746 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:13.949871 master-0 kubenswrapper[6980]: E0313 12:39:13.949852 6980 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found
Mar 13 12:39:13.949933 master-0 kubenswrapper[6980]: E0313 12:39:13.949914 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:15.949896557 +0000 UTC m=+23.283891183 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : configmap "audit-0" not found
Mar 13 12:39:13.950025 master-0 kubenswrapper[6980]: I0313 12:39:13.950003 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw"
Mar 13 12:39:13.950199 master-0 kubenswrapper[6980]: E0313 12:39:13.950168 6980 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 12:39:13.950234 master-0 kubenswrapper[6980]: I0313 12:39:13.950214 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wqpz\" (UniqueName: \"kubernetes.io/projected/1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53-kube-api-access-9wqpz\") pod \"csi-snapshot-controller-7577d6f48-lf2dh\" (UID: \"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh"
Mar 13 12:39:13.950272 master-0 kubenswrapper[6980]: E0313 12:39:13.950243 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:15.950224887 +0000 UTC m=+23.284219573 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : secret "serving-cert" not found
Mar 13 12:39:13.976211 master-0 kubenswrapper[6980]: I0313 12:39:13.976156 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wqpz\" (UniqueName: \"kubernetes.io/projected/1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53-kube-api-access-9wqpz\") pod \"csi-snapshot-controller-7577d6f48-lf2dh\" (UID: \"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh"
Mar 13 12:39:14.094133 master-0 kubenswrapper[6980]: I0313 12:39:14.093995 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh"
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: I0313 12:39:14.536551 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7"
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: I0313 12:39:14.536901 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7"
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: I0313 12:39:14.536959 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh"
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: I0313 12:39:14.536977 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh"
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: E0313 12:39:14.536758 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: E0313 12:39:14.537202 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:22.537187284 +0000 UTC m=+29.871181910 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : secret "serving-cert" not found
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: E0313 12:39:14.537562 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: E0313 12:39:14.537607 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:22.537599206 +0000 UTC m=+29.871593832 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : configmap "client-ca" not found
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: E0313 12:39:14.537635 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: E0313 12:39:14.537677 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:22.537671699 +0000 UTC m=+29.871666325 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : configmap "client-ca" not found
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: E0313 12:39:14.537146 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:14.538117 master-0 kubenswrapper[6980]: E0313 12:39:14.537700 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:22.537695199 +0000 UTC m=+29.871689825 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : secret "serving-cert" not found
Mar 13 12:39:14.659674 master-0 kubenswrapper[6980]: I0313 12:39:14.658720 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"]
Mar 13 12:39:14.659674 master-0 kubenswrapper[6980]: I0313 12:39:14.659320 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"
Mar 13 12:39:14.663841 master-0 kubenswrapper[6980]: I0313 12:39:14.662940 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 13 12:39:14.663841 master-0 kubenswrapper[6980]: I0313 12:39:14.663252 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 13 12:39:14.663841 master-0 kubenswrapper[6980]: I0313 12:39:14.663464 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 13 12:39:14.663841 master-0 kubenswrapper[6980]: I0313 12:39:14.663718 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 13 12:39:14.675389 master-0 kubenswrapper[6980]: I0313 12:39:14.673670 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"]
Mar 13 12:39:15.171371 master-0 kubenswrapper[6980]: I0313 12:39:15.170773 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1e9803a4-a166-42dc-9498-57e213602684-signing-cabundle\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"
Mar 13 12:39:15.171371 master-0 kubenswrapper[6980]: I0313 12:39:15.170890 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1e9803a4-a166-42dc-9498-57e213602684-signing-key\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"
Mar 13 12:39:15.171371 master-0 kubenswrapper[6980]: I0313 12:39:15.170983 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vqww\" (UniqueName: \"kubernetes.io/projected/1e9803a4-a166-42dc-9498-57e213602684-kube-api-access-4vqww\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"
Mar 13 12:39:15.252539 master-0 kubenswrapper[6980]: I0313 12:39:15.249889 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh"]
Mar 13 12:39:15.275554 master-0 kubenswrapper[6980]: I0313 12:39:15.274215 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1e9803a4-a166-42dc-9498-57e213602684-signing-cabundle\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"
Mar 13 12:39:15.276177 master-0 kubenswrapper[6980]: I0313 12:39:15.276055 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1e9803a4-a166-42dc-9498-57e213602684-signing-key\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"
Mar 13 12:39:15.276543 master-0 kubenswrapper[6980]: I0313 12:39:15.276502 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vqww\" (UniqueName: \"kubernetes.io/projected/1e9803a4-a166-42dc-9498-57e213602684-kube-api-access-4vqww\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"
Mar 13 12:39:15.277303 master-0 kubenswrapper[6980]: I0313 12:39:15.277233 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName:
\"kubernetes.io/configmap/1e9803a4-a166-42dc-9498-57e213602684-signing-cabundle\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" Mar 13 12:39:15.377860 master-0 kubenswrapper[6980]: I0313 12:39:15.377587 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-5b44769c65-4nsbw"] Mar 13 12:39:15.377860 master-0 kubenswrapper[6980]: E0313 12:39:15.377793 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" podUID="56052b79-bd1c-4d51-9dfa-d9541499e147" Mar 13 12:39:15.988345 master-0 kubenswrapper[6980]: I0313 12:39:15.988281 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:15.988746 master-0 kubenswrapper[6980]: E0313 12:39:15.988463 6980 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 13 12:39:15.988746 master-0 kubenswrapper[6980]: E0313 12:39:15.988556 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:19.988518337 +0000 UTC m=+27.322512963 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : secret "serving-cert" not found Mar 13 12:39:15.988880 master-0 kubenswrapper[6980]: I0313 12:39:15.988751 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:15.988880 master-0 kubenswrapper[6980]: E0313 12:39:15.988824 6980 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 12:39:15.988880 master-0 kubenswrapper[6980]: E0313 12:39:15.988852 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:19.988843878 +0000 UTC m=+27.322838624 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : configmap "audit-0" not found Mar 13 12:39:16.537667 master-0 kubenswrapper[6980]: I0313 12:39:16.528063 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:16.553622 master-0 kubenswrapper[6980]: I0313 12:39:16.551203 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:16.727207 master-0 kubenswrapper[6980]: I0313 12:39:16.727128 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-client\") pod \"56052b79-bd1c-4d51-9dfa-d9541499e147\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " Mar 13 12:39:16.727207 master-0 kubenswrapper[6980]: I0313 12:39:16.727203 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2rz6\" (UniqueName: \"kubernetes.io/projected/56052b79-bd1c-4d51-9dfa-d9541499e147-kube-api-access-h2rz6\") pod \"56052b79-bd1c-4d51-9dfa-d9541499e147\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " Mar 13 12:39:16.727647 master-0 kubenswrapper[6980]: I0313 12:39:16.727257 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-serving-ca\") pod \"56052b79-bd1c-4d51-9dfa-d9541499e147\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " Mar 13 12:39:16.727647 master-0 kubenswrapper[6980]: I0313 12:39:16.727285 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-encryption-config\") pod \"56052b79-bd1c-4d51-9dfa-d9541499e147\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " Mar 13 12:39:16.727647 master-0 kubenswrapper[6980]: I0313 12:39:16.727335 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-trusted-ca-bundle\") pod \"56052b79-bd1c-4d51-9dfa-d9541499e147\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " Mar 13 12:39:16.727647 master-0 kubenswrapper[6980]: I0313 12:39:16.727396 
6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-node-pullsecrets\") pod \"56052b79-bd1c-4d51-9dfa-d9541499e147\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " Mar 13 12:39:16.727647 master-0 kubenswrapper[6980]: I0313 12:39:16.727419 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-image-import-ca\") pod \"56052b79-bd1c-4d51-9dfa-d9541499e147\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " Mar 13 12:39:16.727647 master-0 kubenswrapper[6980]: I0313 12:39:16.727449 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-config\") pod \"56052b79-bd1c-4d51-9dfa-d9541499e147\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " Mar 13 12:39:16.727647 master-0 kubenswrapper[6980]: I0313 12:39:16.727475 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-audit-dir\") pod \"56052b79-bd1c-4d51-9dfa-d9541499e147\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " Mar 13 12:39:16.728614 master-0 kubenswrapper[6980]: I0313 12:39:16.728410 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "56052b79-bd1c-4d51-9dfa-d9541499e147" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:39:16.728925 master-0 kubenswrapper[6980]: I0313 12:39:16.728867 6980 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:16.728993 master-0 kubenswrapper[6980]: I0313 12:39:16.728975 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "56052b79-bd1c-4d51-9dfa-d9541499e147" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:39:16.729163 master-0 kubenswrapper[6980]: I0313 12:39:16.729087 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "56052b79-bd1c-4d51-9dfa-d9541499e147" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:39:16.729772 master-0 kubenswrapper[6980]: I0313 12:39:16.729746 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "56052b79-bd1c-4d51-9dfa-d9541499e147" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:39:16.730235 master-0 kubenswrapper[6980]: I0313 12:39:16.729985 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "56052b79-bd1c-4d51-9dfa-d9541499e147" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:39:16.730340 master-0 kubenswrapper[6980]: I0313 12:39:16.730074 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-config" (OuterVolumeSpecName: "config") pod "56052b79-bd1c-4d51-9dfa-d9541499e147" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:39:16.830866 master-0 kubenswrapper[6980]: I0313 12:39:16.830747 6980 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:16.830866 master-0 kubenswrapper[6980]: I0313 12:39:16.830788 6980 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:16.830866 master-0 kubenswrapper[6980]: I0313 12:39:16.830801 6980 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:16.830866 master-0 kubenswrapper[6980]: I0313 12:39:16.830814 6980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:16.830866 master-0 kubenswrapper[6980]: I0313 12:39:16.830827 6980 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/56052b79-bd1c-4d51-9dfa-d9541499e147-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:17.530699 master-0 kubenswrapper[6980]: I0313 12:39:17.530661 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:17.997343 master-0 kubenswrapper[6980]: I0313 12:39:17.997261 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"] Mar 13 12:39:17.998839 master-0 kubenswrapper[6980]: I0313 12:39:17.998707 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.010645 master-0 kubenswrapper[6980]: I0313 12:39:18.010592 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 13 12:39:18.010877 master-0 kubenswrapper[6980]: I0313 12:39:18.010682 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 13 12:39:18.011271 master-0 kubenswrapper[6980]: I0313 12:39:18.011244 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 13 12:39:18.016418 master-0 kubenswrapper[6980]: I0313 12:39:18.016381 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 13 12:39:18.047535 master-0 kubenswrapper[6980]: I0313 12:39:18.047469 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: 
\"kubernetes.io/empty-dir/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.047535 master-0 kubenswrapper[6980]: I0313 12:39:18.047526 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w97j5\" (UniqueName: \"kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-kube-api-access-w97j5\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.047851 master-0 kubenswrapper[6980]: I0313 12:39:18.047588 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.047885 master-0 kubenswrapper[6980]: I0313 12:39:18.047867 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.047965 master-0 kubenswrapper[6980]: I0313 12:39:18.047907 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: 
\"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.048080 master-0 kubenswrapper[6980]: I0313 12:39:18.048039 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.148977 master-0 kubenswrapper[6980]: I0313 12:39:18.148896 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.149215 master-0 kubenswrapper[6980]: I0313 12:39:18.149012 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.149215 master-0 kubenswrapper[6980]: I0313 12:39:18.149122 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.149215 master-0 kubenswrapper[6980]: I0313 12:39:18.149140 6980 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-w97j5\" (UniqueName: \"kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-kube-api-access-w97j5\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.149215 master-0 kubenswrapper[6980]: I0313 12:39:18.149186 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.149353 master-0 kubenswrapper[6980]: I0313 12:39:18.149226 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.149353 master-0 kubenswrapper[6980]: I0313 12:39:18.149344 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.150006 master-0 kubenswrapper[6980]: I0313 12:39:18.149974 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: 
\"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.150204 master-0 kubenswrapper[6980]: E0313 12:39:18.150151 6980 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 13 12:39:18.150287 master-0 kubenswrapper[6980]: E0313 12:39:18.150276 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs podName:a8c840d1-8047-4ad6-a990-3ab119ae1cc5 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:18.650253405 +0000 UTC m=+25.984248031 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-lwxxn" (UID: "a8c840d1-8047-4ad6-a990-3ab119ae1cc5") : secret "catalogserver-cert" not found Mar 13 12:39:18.150734 master-0 kubenswrapper[6980]: I0313 12:39:18.150444 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.366017 master-0 kubenswrapper[6980]: I0313 12:39:18.365848 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 12:39:18.366665 master-0 kubenswrapper[6980]: I0313 12:39:18.366637 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:18.366867 master-0 kubenswrapper[6980]: I0313 12:39:18.366828 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"] Mar 13 12:39:18.377010 master-0 kubenswrapper[6980]: I0313 12:39:18.376945 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 12:39:18.451249 master-0 kubenswrapper[6980]: I0313 12:39:18.451182 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:18.451982 master-0 kubenswrapper[6980]: I0313 12:39:18.451919 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-var-lock\") pod \"installer-1-master-0\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:18.452137 master-0 kubenswrapper[6980]: I0313 12:39:18.452094 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:18.553620 master-0 kubenswrapper[6980]: I0313 12:39:18.553510 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kube-api-access\") pod 
\"installer-1-master-0\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:18.553926 master-0 kubenswrapper[6980]: I0313 12:39:18.553880 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-var-lock\") pod \"installer-1-master-0\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:18.554119 master-0 kubenswrapper[6980]: I0313 12:39:18.554043 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:18.554172 master-0 kubenswrapper[6980]: I0313 12:39:18.554128 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:18.554172 master-0 kubenswrapper[6980]: I0313 12:39:18.554044 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-var-lock\") pod \"installer-1-master-0\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:18.655263 master-0 kubenswrapper[6980]: I0313 12:39:18.655172 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod 
\"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:18.655669 master-0 kubenswrapper[6980]: E0313 12:39:18.655375 6980 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 13 12:39:18.655669 master-0 kubenswrapper[6980]: E0313 12:39:18.655482 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs podName:a8c840d1-8047-4ad6-a990-3ab119ae1cc5 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:19.655456612 +0000 UTC m=+26.989451248 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-lwxxn" (UID: "a8c840d1-8047-4ad6-a990-3ab119ae1cc5") : secret "catalogserver-cert" not found Mar 13 12:39:18.973971 master-0 kubenswrapper[6980]: I0313 12:39:18.973747 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 12:39:19.668770 master-0 kubenswrapper[6980]: I0313 12:39:19.668644 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:19.669446 master-0 kubenswrapper[6980]: E0313 12:39:19.668838 6980 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 13 12:39:19.669446 master-0 kubenswrapper[6980]: E0313 12:39:19.668929 6980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs podName:a8c840d1-8047-4ad6-a990-3ab119ae1cc5 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:21.668909491 +0000 UTC m=+29.002904117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-lwxxn" (UID: "a8c840d1-8047-4ad6-a990-3ab119ae1cc5") : secret "catalogserver-cert" not found Mar 13 12:39:19.923808 master-0 kubenswrapper[6980]: I0313 12:39:19.923666 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"] Mar 13 12:39:19.924743 master-0 kubenswrapper[6980]: I0313 12:39:19.924611 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:19.927761 master-0 kubenswrapper[6980]: I0313 12:39:19.927718 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 13 12:39:19.932826 master-0 kubenswrapper[6980]: I0313 12:39:19.932772 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 13 12:39:19.943558 master-0 kubenswrapper[6980]: I0313 12:39:19.943505 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 13 12:39:20.072967 master-0 kubenswrapper[6980]: I0313 12:39:20.072859 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " 
pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:20.072967 master-0 kubenswrapper[6980]: I0313 12:39:20.072972 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.073303 master-0 kubenswrapper[6980]: I0313 12:39:20.073018 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hpcb\" (UniqueName: \"kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-kube-api-access-6hpcb\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.073303 master-0 kubenswrapper[6980]: E0313 12:39:20.073079 6980 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 13 12:39:20.073303 master-0 kubenswrapper[6980]: E0313 12:39:20.073164 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:28.073148165 +0000 UTC m=+35.407142791 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : secret "serving-cert" not found Mar 13 12:39:20.073303 master-0 kubenswrapper[6980]: I0313 12:39:20.073216 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.073630 master-0 kubenswrapper[6980]: I0313 12:39:20.073486 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.073832 master-0 kubenswrapper[6980]: I0313 12:39:20.073793 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:20.073891 master-0 kubenswrapper[6980]: E0313 12:39:20.073876 6980 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 12:39:20.073937 master-0 kubenswrapper[6980]: E0313 12:39:20.073925 6980 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:28.073912249 +0000 UTC m=+35.407907075 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : configmap "audit-0" not found Mar 13 12:39:20.074151 master-0 kubenswrapper[6980]: I0313 12:39:20.074079 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eda319d8-825a-4881-96a9-5386b87f8a4f-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.175741 master-0 kubenswrapper[6980]: I0313 12:39:20.175510 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.175983 master-0 kubenswrapper[6980]: I0313 12:39:20.175750 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hpcb\" (UniqueName: \"kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-kube-api-access-6hpcb\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.176333 master-0 kubenswrapper[6980]: I0313 
12:39:20.176258 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.176409 master-0 kubenswrapper[6980]: I0313 12:39:20.176346 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.176751 master-0 kubenswrapper[6980]: I0313 12:39:20.176631 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.177126 master-0 kubenswrapper[6980]: I0313 12:39:20.177081 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eda319d8-825a-4881-96a9-5386b87f8a4f-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.177592 master-0 kubenswrapper[6980]: I0313 12:39:20.177534 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: 
\"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.178228 master-0 kubenswrapper[6980]: I0313 12:39:20.178188 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eda319d8-825a-4881-96a9-5386b87f8a4f-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:20.235660 master-0 kubenswrapper[6980]: I0313 12:39:20.229675 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"] Mar 13 12:39:21.709633 master-0 kubenswrapper[6980]: I0313 12:39:21.709544 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:21.710121 master-0 kubenswrapper[6980]: E0313 12:39:21.709779 6980 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 13 12:39:21.710121 master-0 kubenswrapper[6980]: E0313 12:39:21.709844 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs podName:a8c840d1-8047-4ad6-a990-3ab119ae1cc5 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:25.709830077 +0000 UTC m=+33.043824703 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-lwxxn" (UID: "a8c840d1-8047-4ad6-a990-3ab119ae1cc5") : secret "catalogserver-cert" not found Mar 13 12:39:22.236075 master-0 kubenswrapper[6980]: I0313 12:39:22.236001 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 13 12:39:22.236603 master-0 kubenswrapper[6980]: I0313 12:39:22.236551 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:22.239932 master-0 kubenswrapper[6980]: I0313 12:39:22.239877 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 13 12:39:22.417160 master-0 kubenswrapper[6980]: I0313 12:39:22.417062 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:22.417491 master-0 kubenswrapper[6980]: I0313 12:39:22.417385 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:22.417717 master-0 kubenswrapper[6980]: I0313 12:39:22.417640 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-var-lock\") pod \"installer-1-master-0\" (UID: 
\"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:22.495949 master-0 kubenswrapper[6980]: I0313 12:39:22.495814 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 13 12:39:22.519375 master-0 kubenswrapper[6980]: I0313 12:39:22.519300 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:22.519778 master-0 kubenswrapper[6980]: I0313 12:39:22.519729 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-var-lock\") pod \"installer-1-master-0\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:22.519867 master-0 kubenswrapper[6980]: I0313 12:39:22.519839 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-var-lock\") pod \"installer-1-master-0\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:22.520219 master-0 kubenswrapper[6980]: I0313 12:39:22.520159 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:22.520297 master-0 kubenswrapper[6980]: I0313 12:39:22.520261 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:22.621244 master-0 kubenswrapper[6980]: I0313 12:39:22.621149 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:22.621494 master-0 kubenswrapper[6980]: E0313 12:39:22.621447 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:22.621533 master-0 kubenswrapper[6980]: E0313 12:39:22.621504 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:22.621565 master-0 kubenswrapper[6980]: E0313 12:39:22.621547 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:38.621525088 +0000 UTC m=+45.955519784 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : secret "serving-cert" not found Mar 13 12:39:22.621640 master-0 kubenswrapper[6980]: I0313 12:39:22.621452 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:22.621640 master-0 kubenswrapper[6980]: E0313 12:39:22.621587 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:38.621562439 +0000 UTC m=+45.955557175 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : configmap "client-ca" not found Mar 13 12:39:22.621751 master-0 kubenswrapper[6980]: I0313 12:39:22.621719 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:22.621787 master-0 kubenswrapper[6980]: I0313 12:39:22.621761 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:22.621787 master-0 kubenswrapper[6980]: E0313 12:39:22.621771 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:22.621858 master-0 kubenswrapper[6980]: E0313 12:39:22.621819 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:38.621808246 +0000 UTC m=+45.955802882 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : configmap "client-ca" not found Mar 13 12:39:22.621858 master-0 kubenswrapper[6980]: E0313 12:39:22.621848 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:22.621922 master-0 kubenswrapper[6980]: E0313 12:39:22.621879 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:38.621868768 +0000 UTC m=+45.955863474 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : secret "serving-cert" not found Mar 13 12:39:25.671992 master-0 kubenswrapper[6980]: I0313 12:39:25.671628 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672022 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 
12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672064 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672097 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672131 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.671811 6980 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672214 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.672197589 +0000 UTC m=+65.006192215 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672154 6980 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672373 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls podName:c1213b50-28bf-43ff-94c4-20616907735b nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.672342884 +0000 UTC m=+65.006337580 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls") pod "ingress-operator-677db989d6-9nxcz" (UID: "c1213b50-28bf-43ff-94c4-20616907735b") : secret "metrics-tls" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672383 6980 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672411 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672422 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls podName:f85ab8ab-f9f1-47ad-9c96-9498cef92474 nodeName:}" 
failed. No retries permitted until 2026-03-13 12:39:57.672409746 +0000 UTC m=+65.006404372 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls") pod "dns-operator-589895fbb7-w7mv2" (UID: "f85ab8ab-f9f1-47ad-9c96-9498cef92474") : secret "metrics-tls" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672459 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672494 6980 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672496 6980 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672280 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672530 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.672519789 +0000 UTC m=+65.006514415 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672547 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.67253712 +0000 UTC m=+65.006531836 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672567 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.67255797 +0000 UTC m=+65.006552676 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : secret "metrics-daemon-secret" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672513 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672648 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672590 6980 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672695 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672705 6980 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert podName:e3eb38e0-d8b5-46fc-809d-73791d569816 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.672691254 +0000 UTC m=+65.006685950 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert") pod "cluster-version-operator-745944c6b7-7rfrg" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816") : secret "cluster-version-operator-serving-cert" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672725 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672742 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672766 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672600 6980 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 
12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672784 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: I0313 12:39:25.672797 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672810 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.672802398 +0000 UTC m=+65.006797024 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672831 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls podName:16c2d774-967f-4964-ab4e-eb13c4364f63 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.672816428 +0000 UTC m=+65.006811144 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-cjq8f" (UID: "16c2d774-967f-4964-ab4e-eb13c4364f63") : secret "image-registry-operator-tls" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672859 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672878 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.67287263 +0000 UTC m=+65.006867246 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "performance-addon-operator-webhook-cert" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672862 6980 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672889 6980 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 12:39:25.672869 master-0 kubenswrapper[6980]: E0313 12:39:25.672902 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 
nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.672895401 +0000 UTC m=+65.006890027 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:39:25.674132 master-0 kubenswrapper[6980]: E0313 12:39:25.672986 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls podName:5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.672979523 +0000 UTC m=+65.006974149 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-mwnxf" (UID: "5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346") : secret "node-tuning-operator-tls" not found Mar 13 12:39:25.674132 master-0 kubenswrapper[6980]: E0313 12:39:25.673011 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:39:57.672994234 +0000 UTC m=+65.006988860 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found Mar 13 12:39:25.774455 master-0 kubenswrapper[6980]: I0313 12:39:25.774290 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:25.778283 master-0 kubenswrapper[6980]: E0313 12:39:25.778182 6980 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 13 12:39:25.778560 master-0 kubenswrapper[6980]: E0313 12:39:25.778437 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs podName:a8c840d1-8047-4ad6-a990-3ab119ae1cc5 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:33.778398503 +0000 UTC m=+41.112393129 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-lwxxn" (UID: "a8c840d1-8047-4ad6-a990-3ab119ae1cc5") : secret "catalogserver-cert" not found Mar 13 12:39:28.101884 master-0 kubenswrapper[6980]: I0313 12:39:28.101810 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:28.102542 master-0 kubenswrapper[6980]: E0313 12:39:28.101982 6980 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 12:39:28.102542 master-0 kubenswrapper[6980]: E0313 12:39:28.102209 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:44.102186135 +0000 UTC m=+51.436180761 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : configmap "audit-0" not found Mar 13 12:39:28.102542 master-0 kubenswrapper[6980]: I0313 12:39:28.102199 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:28.102542 master-0 kubenswrapper[6980]: E0313 12:39:28.102352 6980 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 13 12:39:28.102542 master-0 kubenswrapper[6980]: E0313 12:39:28.102460 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:44.102438313 +0000 UTC m=+51.436432939 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : secret "serving-cert" not found Mar 13 12:39:28.786952 master-0 kubenswrapper[6980]: I0313 12:39:28.786900 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:39:33.781447 master-0 kubenswrapper[6980]: I0313 12:39:33.781273 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:33.783300 master-0 kubenswrapper[6980]: E0313 12:39:33.781589 6980 secret.go:189] Couldn't get secret openshift-catalogd/catalogserver-cert: secret "catalogserver-cert" not found Mar 13 12:39:33.783300 master-0 kubenswrapper[6980]: E0313 12:39:33.781688 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs podName:a8c840d1-8047-4ad6-a990-3ab119ae1cc5 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:49.781663363 +0000 UTC m=+57.115657979 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "catalogserver-certs" (UniqueName: "kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs") pod "catalogd-controller-manager-7f8b8b6f4c-lwxxn" (UID: "a8c840d1-8047-4ad6-a990-3ab119ae1cc5") : secret "catalogserver-cert" not found Mar 13 12:39:38.666602 master-0 kubenswrapper[6980]: I0313 12:39:38.666453 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: E0313 12:39:38.666646 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: I0313 12:39:38.666661 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: E0313 12:39:38.666732 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:40:10.666709182 +0000 UTC m=+78.000703828 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : configmap "client-ca" not found Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: E0313 12:39:38.666781 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: E0313 12:39:38.666835 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:40:10.666821295 +0000 UTC m=+78.000815921 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : secret "serving-cert" not found Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: I0313 12:39:38.666829 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: E0313 12:39:38.666888 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: I0313 12:39:38.666962 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca\") pod \"route-controller-manager-6d757555cb-bghw7\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: E0313 12:39:38.667051 6980 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: E0313 12:39:38.667086 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:40:10.667072363 +0000 UTC m=+78.001066989 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : configmap "client-ca" not found Mar 13 12:39:38.668079 master-0 kubenswrapper[6980]: E0313 12:39:38.667103 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert podName:1fbbd210-9e05-43b6-a0bd-adc51a1cf248 nodeName:}" failed. No retries permitted until 2026-03-13 12:40:10.667096264 +0000 UTC m=+78.001090890 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert") pod "route-controller-manager-6d757555cb-bghw7" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248") : secret "serving-cert" not found Mar 13 12:39:40.052468 master-0 kubenswrapper[6980]: I0313 12:39:40.052372 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"] Mar 13 12:39:40.055484 master-0 kubenswrapper[6980]: I0313 12:39:40.055414 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-669d874ccc-8rrvh"] Mar 13 12:39:40.055825 master-0 kubenswrapper[6980]: E0313 12:39:40.055781 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" podUID="425e18c5-3d11-4f04-be33-45fa3f035129" Mar 13 12:39:40.055939 master-0 kubenswrapper[6980]: I0313 12:39:40.055918 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.059361 master-0 kubenswrapper[6980]: I0313 12:39:40.059299 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 12:39:40.059640 master-0 kubenswrapper[6980]: I0313 12:39:40.059304 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 12:39:40.062206 master-0 kubenswrapper[6980]: I0313 12:39:40.062142 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"] Mar 13 12:39:40.064261 master-0 kubenswrapper[6980]: I0313 12:39:40.064206 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 12:39:40.064789 master-0 kubenswrapper[6980]: I0313 12:39:40.064751 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 12:39:40.064867 master-0 kubenswrapper[6980]: I0313 12:39:40.064829 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 12:39:40.065176 master-0 kubenswrapper[6980]: I0313 12:39:40.065153 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 12:39:40.065245 master-0 kubenswrapper[6980]: I0313 12:39:40.065177 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 12:39:40.066046 master-0 kubenswrapper[6980]: I0313 12:39:40.065965 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 12:39:40.099824 master-0 kubenswrapper[6980]: I0313 12:39:40.099776 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7"] Mar 13 12:39:40.100094 master-0 kubenswrapper[6980]: E0313 12:39:40.100036 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" podUID="1fbbd210-9e05-43b6-a0bd-adc51a1cf248" Mar 13 12:39:40.149165 master-0 kubenswrapper[6980]: I0313 12:39:40.149090 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-audit-policies\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.149165 master-0 kubenswrapper[6980]: I0313 12:39:40.149141 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0943b2db-9658-4a8d-89da-00779d55db6e-audit-dir\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.149451 master-0 kubenswrapper[6980]: I0313 12:39:40.149229 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-serving-ca\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.149451 master-0 kubenswrapper[6980]: I0313 12:39:40.149327 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-encryption-config\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.149451 master-0 kubenswrapper[6980]: I0313 12:39:40.149376 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vgd4v\" (UniqueName: \"kubernetes.io/projected/0943b2db-9658-4a8d-89da-00779d55db6e-kube-api-access-vgd4v\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.149594 master-0 kubenswrapper[6980]: I0313 12:39:40.149491 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-trusted-ca-bundle\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.149760 master-0 kubenswrapper[6980]: I0313 12:39:40.149696 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-client\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.149924 master-0 kubenswrapper[6980]: I0313 12:39:40.149898 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 
kubenswrapper[6980]: I0313 12:39:40.250654 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-client\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.251280 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.251397 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-audit-policies\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.251427 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0943b2db-9658-4a8d-89da-00779d55db6e-audit-dir\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.251471 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-serving-ca\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " 
pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.251525 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-encryption-config\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.251567 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgd4v\" (UniqueName: \"kubernetes.io/projected/0943b2db-9658-4a8d-89da-00779d55db6e-kube-api-access-vgd4v\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.251990 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-trusted-ca-bundle\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.253013 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-trusted-ca-bundle\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: E0313 12:39:40.253126 6980 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: E0313 12:39:40.253186 6980 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert podName:0943b2db-9658-4a8d-89da-00779d55db6e nodeName:}" failed. No retries permitted until 2026-03-13 12:39:40.753165246 +0000 UTC m=+48.087159872 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert") pod "apiserver-6f6d949ddd-p9f8k" (UID: "0943b2db-9658-4a8d-89da-00779d55db6e") : secret "serving-cert" not found Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.254137 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-audit-policies\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.254209 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0943b2db-9658-4a8d-89da-00779d55db6e-audit-dir\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.257938 master-0 kubenswrapper[6980]: I0313 12:39:40.254718 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-serving-ca\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.480311 master-0 kubenswrapper[6980]: I0313 12:39:40.480215 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:40.480311 master-0 kubenswrapper[6980]: I0313 12:39:40.480287 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:40.486607 master-0 kubenswrapper[6980]: I0313 12:39:40.486543 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:40.490990 master-0 kubenswrapper[6980]: I0313 12:39:40.490951 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:40.554405 master-0 kubenswrapper[6980]: I0313 12:39:40.554338 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-proxy-ca-bundles\") pod \"425e18c5-3d11-4f04-be33-45fa3f035129\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " Mar 13 12:39:40.554705 master-0 kubenswrapper[6980]: I0313 12:39:40.554426 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-config\") pod \"425e18c5-3d11-4f04-be33-45fa3f035129\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " Mar 13 12:39:40.554705 master-0 kubenswrapper[6980]: I0313 12:39:40.554467 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqmnl\" (UniqueName: \"kubernetes.io/projected/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-kube-api-access-pqmnl\") pod \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " Mar 13 12:39:40.554705 master-0 kubenswrapper[6980]: I0313 12:39:40.554503 6980 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-config\") pod \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\" (UID: \"1fbbd210-9e05-43b6-a0bd-adc51a1cf248\") " Mar 13 12:39:40.554705 master-0 kubenswrapper[6980]: I0313 12:39:40.554525 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpb4l\" (UniqueName: \"kubernetes.io/projected/425e18c5-3d11-4f04-be33-45fa3f035129-kube-api-access-fpb4l\") pod \"425e18c5-3d11-4f04-be33-45fa3f035129\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " Mar 13 12:39:40.555537 master-0 kubenswrapper[6980]: I0313 12:39:40.555231 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-config" (OuterVolumeSpecName: "config") pod "425e18c5-3d11-4f04-be33-45fa3f035129" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:39:40.555715 master-0 kubenswrapper[6980]: I0313 12:39:40.555664 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "425e18c5-3d11-4f04-be33-45fa3f035129" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:39:40.555838 master-0 kubenswrapper[6980]: I0313 12:39:40.555780 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-config" (OuterVolumeSpecName: "config") pod "1fbbd210-9e05-43b6-a0bd-adc51a1cf248" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:39:40.656109 master-0 kubenswrapper[6980]: I0313 12:39:40.655759 6980 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:40.656109 master-0 kubenswrapper[6980]: I0313 12:39:40.656091 6980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:40.656109 master-0 kubenswrapper[6980]: I0313 12:39:40.656106 6980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:40.757642 master-0 kubenswrapper[6980]: I0313 12:39:40.757462 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:40.757924 master-0 kubenswrapper[6980]: E0313 12:39:40.757727 6980 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 13 12:39:40.757924 master-0 kubenswrapper[6980]: E0313 12:39:40.757834 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert podName:0943b2db-9658-4a8d-89da-00779d55db6e nodeName:}" failed. No retries permitted until 2026-03-13 12:39:41.757807335 +0000 UTC m=+49.091802021 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert") pod "apiserver-6f6d949ddd-p9f8k" (UID: "0943b2db-9658-4a8d-89da-00779d55db6e") : secret "serving-cert" not found Mar 13 12:39:41.484671 master-0 kubenswrapper[6980]: I0313 12:39:41.484609 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7" Mar 13 12:39:41.485244 master-0 kubenswrapper[6980]: I0313 12:39:41.484889 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:39:41.772498 master-0 kubenswrapper[6980]: I0313 12:39:41.772229 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:41.772796 master-0 kubenswrapper[6980]: E0313 12:39:41.772533 6980 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 13 12:39:41.772796 master-0 kubenswrapper[6980]: E0313 12:39:41.772697 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert podName:0943b2db-9658-4a8d-89da-00779d55db6e nodeName:}" failed. No retries permitted until 2026-03-13 12:39:43.772671837 +0000 UTC m=+51.106666673 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert") pod "apiserver-6f6d949ddd-p9f8k" (UID: "0943b2db-9658-4a8d-89da-00779d55db6e") : secret "serving-cert" not found Mar 13 12:39:43.315042 master-0 kubenswrapper[6980]: I0313 12:39:43.314972 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 12:39:43.315848 master-0 kubenswrapper[6980]: E0313 12:39:43.315257 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-kube-scheduler/installer-1-master-0" podUID="ef1b69c9-eb05-43b5-9da3-3e96430879ee" Mar 13 12:39:43.491543 master-0 kubenswrapper[6980]: I0313 12:39:43.491463 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:43.499170 master-0 kubenswrapper[6980]: I0313 12:39:43.499106 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:43.596159 master-0 kubenswrapper[6980]: I0313 12:39:43.595983 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kubelet-dir\") pod \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " Mar 13 12:39:43.596159 master-0 kubenswrapper[6980]: I0313 12:39:43.596111 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ef1b69c9-eb05-43b5-9da3-3e96430879ee" (UID: "ef1b69c9-eb05-43b5-9da3-3e96430879ee"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:39:43.596400 master-0 kubenswrapper[6980]: I0313 12:39:43.596161 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-var-lock\") pod \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " Mar 13 12:39:43.596400 master-0 kubenswrapper[6980]: I0313 12:39:43.596219 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-var-lock" (OuterVolumeSpecName: "var-lock") pod "ef1b69c9-eb05-43b5-9da3-3e96430879ee" (UID: "ef1b69c9-eb05-43b5-9da3-3e96430879ee"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:39:43.597104 master-0 kubenswrapper[6980]: I0313 12:39:43.597056 6980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:43.597168 master-0 kubenswrapper[6980]: I0313 12:39:43.597107 6980 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ef1b69c9-eb05-43b5-9da3-3e96430879ee-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:43.799978 master-0 kubenswrapper[6980]: I0313 12:39:43.799876 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:43.800352 master-0 kubenswrapper[6980]: E0313 12:39:43.800207 6980 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 13 
12:39:43.800729 master-0 kubenswrapper[6980]: E0313 12:39:43.800666 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert podName:0943b2db-9658-4a8d-89da-00779d55db6e nodeName:}" failed. No retries permitted until 2026-03-13 12:39:47.800485091 +0000 UTC m=+55.134479757 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert") pod "apiserver-6f6d949ddd-p9f8k" (UID: "0943b2db-9658-4a8d-89da-00779d55db6e") : secret "serving-cert" not found Mar 13 12:39:44.105947 master-0 kubenswrapper[6980]: I0313 12:39:44.105884 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:44.106382 master-0 kubenswrapper[6980]: I0313 12:39:44.106010 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert\") pod \"apiserver-5b44769c65-4nsbw\" (UID: \"56052b79-bd1c-4d51-9dfa-d9541499e147\") " pod="openshift-apiserver/apiserver-5b44769c65-4nsbw" Mar 13 12:39:44.106382 master-0 kubenswrapper[6980]: E0313 12:39:44.106135 6980 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 12:39:44.106382 master-0 kubenswrapper[6980]: E0313 12:39:44.106245 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:40:16.106220938 +0000 UTC m=+83.440215594 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : configmap "audit-0" not found Mar 13 12:39:44.106382 master-0 kubenswrapper[6980]: E0313 12:39:44.106346 6980 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 13 12:39:44.106652 master-0 kubenswrapper[6980]: E0313 12:39:44.106434 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert podName:56052b79-bd1c-4d51-9dfa-d9541499e147 nodeName:}" failed. No retries permitted until 2026-03-13 12:40:16.106418454 +0000 UTC m=+83.440413080 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert") pod "apiserver-5b44769c65-4nsbw" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147") : secret "serving-cert" not found Mar 13 12:39:44.520756 master-0 kubenswrapper[6980]: I0313 12:39:44.520691 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:44.556326 master-0 kubenswrapper[6980]: I0313 12:39:44.556269 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 12:39:44.560861 master-0 kubenswrapper[6980]: I0313 12:39:44.560193 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 12:39:46.163519 master-0 kubenswrapper[6980]: I0313 12:39:46.163394 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "56052b79-bd1c-4d51-9dfa-d9541499e147" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147"). 
InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:39:46.164940 master-0 kubenswrapper[6980]: I0313 12:39:46.163736 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1e9803a4-a166-42dc-9498-57e213602684-signing-key\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" Mar 13 12:39:46.164940 master-0 kubenswrapper[6980]: I0313 12:39:46.164267 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vqww\" (UniqueName: \"kubernetes.io/projected/1e9803a4-a166-42dc-9498-57e213602684-kube-api-access-4vqww\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" Mar 13 12:39:46.165698 master-0 kubenswrapper[6980]: I0313 12:39:46.165204 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w97j5\" (UniqueName: \"kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-kube-api-access-w97j5\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:46.165698 master-0 kubenswrapper[6980]: I0313 12:39:46.165351 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:46.165698 master-0 kubenswrapper[6980]: I0313 12:39:46.165436 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:46.165698 master-0 kubenswrapper[6980]: I0313 12:39:46.165519 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/56052b79-bd1c-4d51-9dfa-d9541499e147-kube-api-access-h2rz6" (OuterVolumeSpecName: "kube-api-access-h2rz6") pod "56052b79-bd1c-4d51-9dfa-d9541499e147" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147"). InnerVolumeSpecName "kube-api-access-h2rz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:39:46.166941 master-0 kubenswrapper[6980]: I0313 12:39:46.166053 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hpcb\" (UniqueName: \"kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-kube-api-access-6hpcb\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:46.166941 master-0 kubenswrapper[6980]: I0313 12:39:46.166239 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kube-api-access\") pod \"installer-1-master-0\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 12:39:46.166941 master-0 kubenswrapper[6980]: I0313 12:39:46.166417 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "56052b79-bd1c-4d51-9dfa-d9541499e147" (UID: "56052b79-bd1c-4d51-9dfa-d9541499e147"). 
InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:39:46.166941 master-0 kubenswrapper[6980]: I0313 12:39:46.166856 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:46.167270 master-0 kubenswrapper[6980]: I0313 12:39:46.167228 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-client\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:46.171960 master-0 kubenswrapper[6980]: E0313 12:39:46.168452 6980 kubelet_volumes.go:263] "There were many similar errors. Turn up verbosity to see them." err="orphaned pod \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\" found, but failed to rmdir() volume at path /var/lib/kubelet/pods/ef1b69c9-eb05-43b5-9da3-3e96430879ee/volumes/kubernetes.io~projected/kube-api-access: device or resource busy" numErrs=2 Mar 13 12:39:46.171960 master-0 kubenswrapper[6980]: I0313 12:39:46.169006 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/425e18c5-3d11-4f04-be33-45fa3f035129-kube-api-access-fpb4l" (OuterVolumeSpecName: "kube-api-access-fpb4l") pod "425e18c5-3d11-4f04-be33-45fa3f035129" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129"). InnerVolumeSpecName "kube-api-access-fpb4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:39:46.171960 master-0 kubenswrapper[6980]: I0313 12:39:46.169392 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-encryption-config\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:46.171960 master-0 kubenswrapper[6980]: I0313 12:39:46.169443 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgd4v\" (UniqueName: \"kubernetes.io/projected/0943b2db-9658-4a8d-89da-00779d55db6e-kube-api-access-vgd4v\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:46.171960 master-0 kubenswrapper[6980]: I0313 12:39:46.171851 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-kube-api-access-pqmnl" (OuterVolumeSpecName: "kube-api-access-pqmnl") pod "1fbbd210-9e05-43b6-a0bd-adc51a1cf248" (UID: "1fbbd210-9e05-43b6-a0bd-adc51a1cf248"). InnerVolumeSpecName "kube-api-access-pqmnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:39:46.191249 master-0 kubenswrapper[6980]: I0313 12:39:46.190963 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" Mar 13 12:39:46.269850 master-0 kubenswrapper[6980]: I0313 12:39:46.269765 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kube-api-access\") pod \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\" (UID: \"ef1b69c9-eb05-43b5-9da3-3e96430879ee\") " Mar 13 12:39:46.270373 master-0 kubenswrapper[6980]: I0313 12:39:46.270339 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fpb4l\" (UniqueName: \"kubernetes.io/projected/425e18c5-3d11-4f04-be33-45fa3f035129-kube-api-access-fpb4l\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:46.270373 master-0 kubenswrapper[6980]: I0313 12:39:46.270372 6980 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:46.270525 master-0 kubenswrapper[6980]: I0313 12:39:46.270395 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2rz6\" (UniqueName: \"kubernetes.io/projected/56052b79-bd1c-4d51-9dfa-d9541499e147-kube-api-access-h2rz6\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:46.270525 master-0 kubenswrapper[6980]: I0313 12:39:46.270415 6980 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:46.270525 master-0 kubenswrapper[6980]: I0313 12:39:46.270434 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqmnl\" (UniqueName: \"kubernetes.io/projected/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-kube-api-access-pqmnl\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:46.271216 master-0 kubenswrapper[6980]: I0313 12:39:46.271151 6980 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 12:39:46.297708 master-0 kubenswrapper[6980]: E0313 12:39:46.297154 6980 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.439s" Mar 13 12:39:46.297708 master-0 kubenswrapper[6980]: I0313 12:39:46.297187 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 12:39:46.299162 master-0 kubenswrapper[6980]: I0313 12:39:46.297886 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 12:39:46.299162 master-0 kubenswrapper[6980]: I0313 12:39:46.298035 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.300294 master-0 kubenswrapper[6980]: I0313 12:39:46.300236 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ef1b69c9-eb05-43b5-9da3-3e96430879ee" (UID: "ef1b69c9-eb05-43b5-9da3-3e96430879ee"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:39:46.302495 master-0 kubenswrapper[6980]: I0313 12:39:46.302464 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 12:39:46.340534 master-0 kubenswrapper[6980]: I0313 12:39:46.340481 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:46.373403 master-0 kubenswrapper[6980]: I0313 12:39:46.373348 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ef1b69c9-eb05-43b5-9da3-3e96430879ee-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:46.378798 master-0 kubenswrapper[6980]: I0313 12:39:46.378741 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7"] Mar 13 12:39:46.379426 master-0 kubenswrapper[6980]: I0313 12:39:46.379392 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6d757555cb-bghw7"] Mar 13 12:39:46.410493 master-0 kubenswrapper[6980]: I0313 12:39:46.410435 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-5b44769c65-4nsbw"] Mar 13 12:39:46.427853 master-0 kubenswrapper[6980]: I0313 12:39:46.427635 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-5b44769c65-4nsbw"] Mar 13 12:39:46.475462 master-0 kubenswrapper[6980]: I0313 12:39:46.474270 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f52d50d6-44fd-47d2-bca6-77be37c69694-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.475462 master-0 kubenswrapper[6980]: I0313 12:39:46.474371 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " 
pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.475462 master-0 kubenswrapper[6980]: I0313 12:39:46.474506 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-var-lock\") pod \"installer-2-master-0\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.517439 master-0 kubenswrapper[6980]: I0313 12:39:46.517186 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"] Mar 13 12:39:46.524632 master-0 kubenswrapper[6980]: I0313 12:39:46.522079 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 13 12:39:46.536545 master-0 kubenswrapper[6980]: I0313 12:39:46.536456 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" event={"ID":"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53","Type":"ContainerStarted","Data":"2f485ea5123a1d0182412387178e57b07dfd142ef3af3f80ba71084ac36459bd"} Mar 13 12:39:46.537887 master-0 kubenswrapper[6980]: W0313 12:39:46.537773 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1e9803a4_a166_42dc_9498_57e213602684.slice/crio-ae06b35b34defd433d66d0dcfdcccb5e623a3353da2ccedea19406db7fe465d6 WatchSource:0}: Error finding container ae06b35b34defd433d66d0dcfdcccb5e623a3353da2ccedea19406db7fe465d6: Status 404 returned error can't find the container with id ae06b35b34defd433d66d0dcfdcccb5e623a3353da2ccedea19406db7fe465d6 Mar 13 12:39:46.542604 master-0 kubenswrapper[6980]: W0313 12:39:46.542528 6980 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-pod7028b88a_ef6e_47f7_bbd7_cf798efdded5.slice/crio-0d2b45e42e0e063443f8930f6b7d09a6d020a634d13e2cb7c2ed7329e003e782 WatchSource:0}: Error finding container 0d2b45e42e0e063443f8930f6b7d09a6d020a634d13e2cb7c2ed7329e003e782: Status 404 returned error can't find the container with id 0d2b45e42e0e063443f8930f6b7d09a6d020a634d13e2cb7c2ed7329e003e782 Mar 13 12:39:46.580110 master-0 kubenswrapper[6980]: I0313 12:39:46.579983 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-var-lock\") pod \"installer-2-master-0\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.580110 master-0 kubenswrapper[6980]: I0313 12:39:46.580116 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f52d50d6-44fd-47d2-bca6-77be37c69694-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.580383 master-0 kubenswrapper[6980]: I0313 12:39:46.580156 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.580753 master-0 kubenswrapper[6980]: I0313 12:39:46.580450 6980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:46.580753 master-0 kubenswrapper[6980]: I0313 12:39:46.580501 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.580753 master-0 kubenswrapper[6980]: I0313 12:39:46.580592 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-var-lock\") pod \"installer-2-master-0\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.580753 master-0 kubenswrapper[6980]: I0313 12:39:46.580609 6980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1fbbd210-9e05-43b6-a0bd-adc51a1cf248-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:46.580753 master-0 kubenswrapper[6980]: I0313 12:39:46.580703 6980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/56052b79-bd1c-4d51-9dfa-d9541499e147-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:46.580753 master-0 kubenswrapper[6980]: I0313 12:39:46.580731 6980 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/56052b79-bd1c-4d51-9dfa-d9541499e147-audit\") on node \"master-0\" DevicePath \"\"" Mar 13 12:39:46.581526 master-0 kubenswrapper[6980]: I0313 12:39:46.581454 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"] Mar 13 12:39:46.615953 master-0 kubenswrapper[6980]: I0313 12:39:46.615698 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f52d50d6-44fd-47d2-bca6-77be37c69694-kube-api-access\") pod \"installer-2-master-0\" (UID: 
\"f52d50d6-44fd-47d2-bca6-77be37c69694\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.647227 master-0 kubenswrapper[6980]: I0313 12:39:46.647103 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:39:46.771829 master-0 kubenswrapper[6980]: W0313 12:39:46.771762 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeda319d8_825a_4881_96a9_5386b87f8a4f.slice/crio-af3b95a05d0ae3229790032e0ff83bd0ae5924b5a61d802b485f5d4cc67a961c WatchSource:0}: Error finding container af3b95a05d0ae3229790032e0ff83bd0ae5924b5a61d802b485f5d4cc67a961c: Status 404 returned error can't find the container with id af3b95a05d0ae3229790032e0ff83bd0ae5924b5a61d802b485f5d4cc67a961c Mar 13 12:39:46.866991 master-0 kubenswrapper[6980]: I0313 12:39:46.866933 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1fbbd210-9e05-43b6-a0bd-adc51a1cf248" path="/var/lib/kubelet/pods/1fbbd210-9e05-43b6-a0bd-adc51a1cf248/volumes" Mar 13 12:39:46.867448 master-0 kubenswrapper[6980]: I0313 12:39:46.867370 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="56052b79-bd1c-4d51-9dfa-d9541499e147" path="/var/lib/kubelet/pods/56052b79-bd1c-4d51-9dfa-d9541499e147/volumes" Mar 13 12:39:46.867827 master-0 kubenswrapper[6980]: I0313 12:39:46.867737 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1b69c9-eb05-43b5-9da3-3e96430879ee" path="/var/lib/kubelet/pods/ef1b69c9-eb05-43b5-9da3-3e96430879ee/volumes" Mar 13 12:39:47.052411 master-0 kubenswrapper[6980]: I0313 12:39:47.052358 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 12:39:47.432509 master-0 kubenswrapper[6980]: I0313 12:39:47.431686 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-8459d5b549-n9fzj"] 
Mar 13 12:39:47.433542 master-0 kubenswrapper[6980]: I0313 12:39:47.432999 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.435555 master-0 kubenswrapper[6980]: I0313 12:39:47.435513 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"]
Mar 13 12:39:47.436341 master-0 kubenswrapper[6980]: I0313 12:39:47.436010 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.445426 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.445470 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.445570 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.445607 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.445695 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.445770 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.445818 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.445911 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.445935 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.446007 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.446154 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.446199 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.446260 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 13 12:39:47.447601 master-0 kubenswrapper[6980]: I0313 12:39:47.446325 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 13 12:39:47.448356 master-0 kubenswrapper[6980]: I0313 12:39:47.448229 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-8459d5b549-n9fzj"]
Mar 13 12:39:47.449673 master-0 kubenswrapper[6980]: I0313 12:39:47.449377 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 12:39:47.470085 master-0 kubenswrapper[6980]: I0313 12:39:47.468497 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"]
Mar 13 12:39:47.554103 master-0 kubenswrapper[6980]: I0313 12:39:47.553923 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" event={"ID":"eda319d8-825a-4881-96a9-5386b87f8a4f","Type":"ContainerStarted","Data":"af07eb0bfb5662fdce4cc60c3ec1d13fa870a7ff683788d887fee1f0a6eb9f68"}
Mar 13 12:39:47.554103 master-0 kubenswrapper[6980]: I0313 12:39:47.553985 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" event={"ID":"eda319d8-825a-4881-96a9-5386b87f8a4f","Type":"ContainerStarted","Data":"cbb2865534497635b5ca625e2074d592be0ad7241931d751a9044f1c282a4c0f"}
Mar 13 12:39:47.554103 master-0 kubenswrapper[6980]: I0313 12:39:47.554000 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" event={"ID":"eda319d8-825a-4881-96a9-5386b87f8a4f","Type":"ContainerStarted","Data":"af3b95a05d0ae3229790032e0ff83bd0ae5924b5a61d802b485f5d4cc67a961c"}
Mar 13 12:39:47.554103 master-0 kubenswrapper[6980]: I0313 12:39:47.554068 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"
Mar 13 12:39:47.555571 master-0 kubenswrapper[6980]: I0313 12:39:47.555517 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"f52d50d6-44fd-47d2-bca6-77be37c69694","Type":"ContainerStarted","Data":"a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083"}
Mar 13 12:39:47.555666 master-0 kubenswrapper[6980]: I0313 12:39:47.555589 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"f52d50d6-44fd-47d2-bca6-77be37c69694","Type":"ContainerStarted","Data":"5d430aa8dbd7c3018a7e05ad11fe92ea7c8db90db9b0a43b068c0c9e5ee73025"}
Mar 13 12:39:47.558069 master-0 kubenswrapper[6980]: I0313 12:39:47.558020 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"7028b88a-ef6e-47f7-bbd7-cf798efdded5","Type":"ContainerStarted","Data":"79cd707206ff99c36a959e487c7685688d55e645d476231af44713218abe6dab"}
Mar 13 12:39:47.558069 master-0 kubenswrapper[6980]: I0313 12:39:47.558059 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"7028b88a-ef6e-47f7-bbd7-cf798efdded5","Type":"ContainerStarted","Data":"0d2b45e42e0e063443f8930f6b7d09a6d020a634d13e2cb7c2ed7329e003e782"}
Mar 13 12:39:47.559832 master-0 kubenswrapper[6980]: I0313 12:39:47.559791 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" event={"ID":"1e9803a4-a166-42dc-9498-57e213602684","Type":"ContainerStarted","Data":"0b8ffb9009d34dca0914bb1efe6a7d4b6106f10f28097f2ee3fe0b233ae17b98"}
Mar 13 12:39:47.559832 master-0 kubenswrapper[6980]: I0313 12:39:47.559819 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" event={"ID":"1e9803a4-a166-42dc-9498-57e213602684","Type":"ContainerStarted","Data":"ae06b35b34defd433d66d0dcfdcccb5e623a3353da2ccedea19406db7fe465d6"}
Mar 13 12:39:47.589943 master-0 kubenswrapper[6980]: I0313 12:39:47.589823 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" podStartSLOduration=28.589790412 podStartE2EDuration="28.589790412s" podCreationTimestamp="2026-03-13 12:39:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:39:47.572496552 +0000 UTC m=+54.906491178" watchObservedRunningTime="2026-03-13 12:39:47.589790412 +0000 UTC m=+54.923785038"
Mar 13 12:39:47.590220 master-0 kubenswrapper[6980]: I0313 12:39:47.590177 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" podStartSLOduration=33.590167974 podStartE2EDuration="33.590167974s" podCreationTimestamp="2026-03-13 12:39:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:39:47.586806901 +0000 UTC m=+54.920801527" watchObservedRunningTime="2026-03-13 12:39:47.590167974 +0000 UTC m=+54.924162600"
Mar 13 12:39:47.596310 master-0 kubenswrapper[6980]: I0313 12:39:47.596217 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-serving-ca\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.596310 master-0 kubenswrapper[6980]: I0313 12:39:47.596284 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-client\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.596547 master-0 kubenswrapper[6980]: I0313 12:39:47.596321 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-audit\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.596547 master-0 kubenswrapper[6980]: I0313 12:39:47.596399 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.596547 master-0 kubenswrapper[6980]: I0313 12:39:47.596426 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9v7c\" (UniqueName: \"kubernetes.io/projected/96909d88-6a1b-4b24-854d-724c0d3f2ad9-kube-api-access-r9v7c\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.596547 master-0 kubenswrapper[6980]: I0313 12:39:47.596451 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-audit-dir\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.596547 master-0 kubenswrapper[6980]: I0313 12:39:47.596481 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-config\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.596547 master-0 kubenswrapper[6980]: I0313 12:39:47.596519 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-node-pullsecrets\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.596547 master-0 kubenswrapper[6980]: I0313 12:39:47.596541 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-config\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.597008 master-0 kubenswrapper[6980]: I0313 12:39:47.596656 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.597008 master-0 kubenswrapper[6980]: I0313 12:39:47.596832 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-image-import-ca\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.597008 master-0 kubenswrapper[6980]: I0313 12:39:47.596861 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-trusted-ca-bundle\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.597825 master-0 kubenswrapper[6980]: I0313 12:39:47.597014 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-encryption-config\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.597825 master-0 kubenswrapper[6980]: I0313 12:39:47.597778 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpjj6\" (UniqueName: \"kubernetes.io/projected/7574e950-de2e-4f90-99d0-eae3b45cd900-kube-api-access-hpjj6\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.597969 master-0 kubenswrapper[6980]: I0313 12:39:47.597897 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-client-ca\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.623815 master-0 kubenswrapper[6980]: I0313 12:39:47.623727 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=26.623706761 podStartE2EDuration="26.623706761s" podCreationTimestamp="2026-03-13 12:39:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:39:47.603840443 +0000 UTC m=+54.937835069" watchObservedRunningTime="2026-03-13 12:39:47.623706761 +0000 UTC m=+54.957701387"
Mar 13 12:39:47.702613 master-0 kubenswrapper[6980]: I0313 12:39:47.700857 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-audit\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.702613 master-0 kubenswrapper[6980]: I0313 12:39:47.701024 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.702613 master-0 kubenswrapper[6980]: I0313 12:39:47.701060 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9v7c\" (UniqueName: \"kubernetes.io/projected/96909d88-6a1b-4b24-854d-724c0d3f2ad9-kube-api-access-r9v7c\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.702613 master-0 kubenswrapper[6980]: I0313 12:39:47.701093 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-audit-dir\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.702613 master-0 kubenswrapper[6980]: I0313 12:39:47.701140 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-config\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.702613 master-0 kubenswrapper[6980]: I0313 12:39:47.701162 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-config\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.702613 master-0 kubenswrapper[6980]: I0313 12:39:47.701180 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-node-pullsecrets\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.702613 master-0 kubenswrapper[6980]: I0313 12:39:47.701204 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.702613 master-0 kubenswrapper[6980]: I0313 12:39:47.701280 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-image-import-ca\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.702613 master-0 kubenswrapper[6980]: I0313 12:39:47.701901 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-audit-dir\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: I0313 12:39:47.812388 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-trusted-ca-bundle\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: I0313 12:39:47.812543 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-encryption-config\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: I0313 12:39:47.812592 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpjj6\" (UniqueName: \"kubernetes.io/projected/7574e950-de2e-4f90-99d0-eae3b45cd900-kube-api-access-hpjj6\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: I0313 12:39:47.812875 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-node-pullsecrets\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: E0313 12:39:47.812981 6980 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: I0313 12:39:47.813030 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-audit\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: I0313 12:39:47.813054 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-client-ca\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: E0313 12:39:47.814670 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: E0313 12:39:47.814813 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert podName:96909d88-6a1b-4b24-854d-724c0d3f2ad9 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:48.314782505 +0000 UTC m=+55.648777131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert") pod "route-controller-manager-6c5bd84bb8-t9q8b" (UID: "96909d88-6a1b-4b24-854d-724c0d3f2ad9") : secret "serving-cert" not found
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: I0313 12:39:47.814880 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-serving-ca\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: I0313 12:39:47.814907 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-client\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.815469 master-0 kubenswrapper[6980]: E0313 12:39:47.815213 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert podName:7574e950-de2e-4f90-99d0-eae3b45cd900 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:48.315186077 +0000 UTC m=+55.649180703 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert") pod "apiserver-8459d5b549-n9fzj" (UID: "7574e950-de2e-4f90-99d0-eae3b45cd900") : secret "serving-cert" not found
Mar 13 12:39:47.816107 master-0 kubenswrapper[6980]: I0313 12:39:47.815599 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-trusted-ca-bundle\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.816107 master-0 kubenswrapper[6980]: I0313 12:39:47.815870 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-image-import-ca\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.817855 master-0 kubenswrapper[6980]: I0313 12:39:47.816280 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-config\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.817855 master-0 kubenswrapper[6980]: I0313 12:39:47.816563 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-serving-ca\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.819478 master-0 kubenswrapper[6980]: I0313 12:39:47.819440 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-config\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.819868 master-0 kubenswrapper[6980]: I0313 12:39:47.819820 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-client-ca\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.824675 master-0 kubenswrapper[6980]: I0313 12:39:47.823337 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-encryption-config\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.824675 master-0 kubenswrapper[6980]: I0313 12:39:47.824378 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-client\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.843805 master-0 kubenswrapper[6980]: I0313 12:39:47.843554 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpjj6\" (UniqueName: \"kubernetes.io/projected/7574e950-de2e-4f90-99d0-eae3b45cd900-kube-api-access-hpjj6\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:47.844103 master-0 kubenswrapper[6980]: I0313 12:39:47.844061 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9v7c\" (UniqueName: \"kubernetes.io/projected/96909d88-6a1b-4b24-854d-724c0d3f2ad9-kube-api-access-r9v7c\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:47.918457 master-0 kubenswrapper[6980]: I0313 12:39:47.916412 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:39:47.918457 master-0 kubenswrapper[6980]: E0313 12:39:47.916675 6980 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 12:39:47.918457 master-0 kubenswrapper[6980]: E0313 12:39:47.916738 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert podName:0943b2db-9658-4a8d-89da-00779d55db6e nodeName:}" failed. No retries permitted until 2026-03-13 12:39:55.916718958 +0000 UTC m=+63.250713584 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert") pod "apiserver-6f6d949ddd-p9f8k" (UID: "0943b2db-9658-4a8d-89da-00779d55db6e") : secret "serving-cert" not found
Mar 13 12:39:48.326629 master-0 kubenswrapper[6980]: I0313 12:39:48.323824 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:48.326629 master-0 kubenswrapper[6980]: E0313 12:39:48.324232 6980 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found
Mar 13 12:39:48.326629 master-0 kubenswrapper[6980]: I0313 12:39:48.324247 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:48.326629 master-0 kubenswrapper[6980]: E0313 12:39:48.324328 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert podName:7574e950-de2e-4f90-99d0-eae3b45cd900 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:49.324305305 +0000 UTC m=+56.658299961 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert") pod "apiserver-8459d5b549-n9fzj" (UID: "7574e950-de2e-4f90-99d0-eae3b45cd900") : secret "serving-cert" not found
Mar 13 12:39:48.326629 master-0 kubenswrapper[6980]: E0313 12:39:48.324534 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:48.326629 master-0 kubenswrapper[6980]: E0313 12:39:48.324618 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert podName:96909d88-6a1b-4b24-854d-724c0d3f2ad9 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:49.324598103 +0000 UTC m=+56.658592739 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert") pod "route-controller-manager-6c5bd84bb8-t9q8b" (UID: "96909d88-6a1b-4b24-854d-724c0d3f2ad9") : secret "serving-cert" not found
Mar 13 12:39:48.565982 master-0 kubenswrapper[6980]: I0313 12:39:48.565809 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-456r5" event={"ID":"2b5ab386-14ed-4610-a08a-54b6de877603","Type":"ContainerStarted","Data":"b3a957233e7491f049f9c02ecd8877056ceaa3cea68bd2be407ee549feb00f31"}
Mar 13 12:39:48.580593 master-0 kubenswrapper[6980]: I0313 12:39:48.580488 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=3.580464302 podStartE2EDuration="3.580464302s" podCreationTimestamp="2026-03-13 12:39:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:39:47.626828937 +0000 UTC m=+54.960823563" watchObservedRunningTime="2026-03-13 12:39:48.580464302 +0000 UTC m=+55.914458928"
Mar 13 12:39:49.347085 master-0 kubenswrapper[6980]: I0313 12:39:49.346893 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"
Mar 13 12:39:49.347272 master-0 kubenswrapper[6980]: E0313 12:39:49.347102 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found
Mar 13 12:39:49.347272 master-0 kubenswrapper[6980]: E0313 12:39:49.347197 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert podName:96909d88-6a1b-4b24-854d-724c0d3f2ad9 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:51.347177492 +0000 UTC m=+58.681172118 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert") pod "route-controller-manager-6c5bd84bb8-t9q8b" (UID: "96909d88-6a1b-4b24-854d-724c0d3f2ad9") : secret "serving-cert" not found
Mar 13 12:39:49.347339 master-0 kubenswrapper[6980]: I0313 12:39:49.347278 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:49.353775 master-0 kubenswrapper[6980]: I0313 12:39:49.353726 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:49.564873 master-0 kubenswrapper[6980]: I0313 12:39:49.564803 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:39:49.570054 master-0 kubenswrapper[6980]: I0313 12:39:49.570003 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" event={"ID":"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53","Type":"ContainerStarted","Data":"6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e"}
Mar 13 12:39:49.591624 master-0 kubenswrapper[6980]: I0313 12:39:49.591471 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" podStartSLOduration=33.844799938 podStartE2EDuration="36.591437095s" podCreationTimestamp="2026-03-13 12:39:13 +0000 UTC" firstStartedPulling="2026-03-13 12:39:46.173715789 +0000 UTC m=+53.507710415" lastFinishedPulling="2026-03-13 12:39:48.920352946 +0000 UTC m=+56.254347572" observedRunningTime="2026-03-13 12:39:49.586408871 +0000 UTC m=+56.920403517" watchObservedRunningTime="2026-03-13 12:39:49.591437095 +0000 UTC m=+56.925431741"
Mar 13 12:39:49.778533 master-0 kubenswrapper[6980]: I0313 12:39:49.777611 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-8459d5b549-n9fzj"]
Mar 13 12:39:49.785553 master-0 kubenswrapper[6980]: W0313 12:39:49.785516 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7574e950_de2e_4f90_99d0_eae3b45cd900.slice/crio-c5484d1b3c48429e30590a8c004d9563ea8ff1590e9912835b4e1fb40bb82de5 WatchSource:0}: Error finding container c5484d1b3c48429e30590a8c004d9563ea8ff1590e9912835b4e1fb40bb82de5: Status 404 returned error can't find the container with id c5484d1b3c48429e30590a8c004d9563ea8ff1590e9912835b4e1fb40bb82de5
Mar 13 12:39:49.853249 master-0 kubenswrapper[6980]: I0313 12:39:49.853192 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:49.857214 master-0 kubenswrapper[6980]: I0313 12:39:49.857185 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:50.128678 master-0 kubenswrapper[6980]: I0313 12:39:50.128502 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:50.308666 master-0 kubenswrapper[6980]: I0313 12:39:50.306500 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"] Mar 13 12:39:50.324438 master-0 kubenswrapper[6980]: W0313 12:39:50.324370 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8c840d1_8047_4ad6_a990_3ab119ae1cc5.slice/crio-1b2dea30812459a0f2e3cad7fc9f7d04a23de47d9995bf80f1829df8b09480d6 WatchSource:0}: Error finding container 1b2dea30812459a0f2e3cad7fc9f7d04a23de47d9995bf80f1829df8b09480d6: Status 404 returned error can't find the container with id 1b2dea30812459a0f2e3cad7fc9f7d04a23de47d9995bf80f1829df8b09480d6 Mar 13 12:39:50.586014 master-0 kubenswrapper[6980]: I0313 12:39:50.585963 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" 
event={"ID":"a8c840d1-8047-4ad6-a990-3ab119ae1cc5","Type":"ContainerStarted","Data":"d5ee80dd4d821b3e4453f7f4669fe58ec5fe184bba449fafcb2349ae6e7f4431"} Mar 13 12:39:50.586014 master-0 kubenswrapper[6980]: I0313 12:39:50.586015 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" event={"ID":"a8c840d1-8047-4ad6-a990-3ab119ae1cc5","Type":"ContainerStarted","Data":"1b2dea30812459a0f2e3cad7fc9f7d04a23de47d9995bf80f1829df8b09480d6"} Mar 13 12:39:50.587291 master-0 kubenswrapper[6980]: I0313 12:39:50.587243 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" event={"ID":"7574e950-de2e-4f90-99d0-eae3b45cd900","Type":"ContainerStarted","Data":"c5484d1b3c48429e30590a8c004d9563ea8ff1590e9912835b4e1fb40bb82de5"} Mar 13 12:39:51.375110 master-0 kubenswrapper[6980]: I0313 12:39:51.374745 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b" Mar 13 12:39:51.375424 master-0 kubenswrapper[6980]: E0313 12:39:51.374981 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:39:51.375424 master-0 kubenswrapper[6980]: E0313 12:39:51.375258 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert podName:96909d88-6a1b-4b24-854d-724c0d3f2ad9 nodeName:}" failed. No retries permitted until 2026-03-13 12:39:55.375231424 +0000 UTC m=+62.709226050 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert") pod "route-controller-manager-6c5bd84bb8-t9q8b" (UID: "96909d88-6a1b-4b24-854d-724c0d3f2ad9") : secret "serving-cert" not found Mar 13 12:39:51.555612 master-0 kubenswrapper[6980]: I0313 12:39:51.554658 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 12:39:51.555612 master-0 kubenswrapper[6980]: I0313 12:39:51.555322 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:51.563613 master-0 kubenswrapper[6980]: I0313 12:39:51.559534 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 12:39:51.577355 master-0 kubenswrapper[6980]: I0313 12:39:51.576316 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 12:39:51.582612 master-0 kubenswrapper[6980]: I0313 12:39:51.578144 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:51.582612 master-0 kubenswrapper[6980]: I0313 12:39:51.578266 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/642c9e64-2d6f-4f0a-babf-8a54e0002415-kube-api-access\") pod \"installer-1-master-0\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:51.582612 master-0 kubenswrapper[6980]: I0313 12:39:51.578315 6980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-var-lock\") pod \"installer-1-master-0\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:51.600763 master-0 kubenswrapper[6980]: I0313 12:39:51.600698 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" event={"ID":"a8c840d1-8047-4ad6-a990-3ab119ae1cc5","Type":"ContainerStarted","Data":"30b31c049d6bbc747c9d176a9321b53f132ec100e2bcb266f862f58f0efabb73"} Mar 13 12:39:51.601380 master-0 kubenswrapper[6980]: I0313 12:39:51.600869 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:39:51.620722 master-0 kubenswrapper[6980]: I0313 12:39:51.619181 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" podStartSLOduration=34.619157027 podStartE2EDuration="34.619157027s" podCreationTimestamp="2026-03-13 12:39:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:39:51.61827067 +0000 UTC m=+58.952265316" watchObservedRunningTime="2026-03-13 12:39:51.619157027 +0000 UTC m=+58.953151653" Mar 13 12:39:51.740203 master-0 kubenswrapper[6980]: I0313 12:39:51.740136 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/642c9e64-2d6f-4f0a-babf-8a54e0002415-kube-api-access\") pod \"installer-1-master-0\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:51.740429 master-0 kubenswrapper[6980]: I0313 12:39:51.740331 6980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-var-lock\") pod \"installer-1-master-0\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:51.740499 master-0 kubenswrapper[6980]: I0313 12:39:51.740433 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:51.740703 master-0 kubenswrapper[6980]: I0313 12:39:51.740594 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:51.741718 master-0 kubenswrapper[6980]: I0313 12:39:51.741515 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-var-lock\") pod \"installer-1-master-0\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:51.760451 master-0 kubenswrapper[6980]: I0313 12:39:51.760386 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/642c9e64-2d6f-4f0a-babf-8a54e0002415-kube-api-access\") pod \"installer-1-master-0\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:51.889290 master-0 kubenswrapper[6980]: I0313 12:39:51.889215 6980 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:39:52.092777 master-0 kubenswrapper[6980]: I0313 12:39:52.092703 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 12:39:52.931227 master-0 kubenswrapper[6980]: I0313 12:39:52.930856 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 13 12:39:52.932329 master-0 kubenswrapper[6980]: I0313 12:39:52.932052 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:52.935400 master-0 kubenswrapper[6980]: I0313 12:39:52.934872 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 12:39:52.939733 master-0 kubenswrapper[6980]: I0313 12:39:52.939692 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 13 12:39:53.087837 master-0 kubenswrapper[6980]: I0313 12:39:53.087739 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:53.088127 master-0 kubenswrapper[6980]: I0313 12:39:53.087939 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0feecf04-574d-4bf6-968d-77dd5c35260b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:53.088127 master-0 kubenswrapper[6980]: I0313 12:39:53.087992 6980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-var-lock\") pod \"installer-1-master-0\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:53.189039 master-0 kubenswrapper[6980]: I0313 12:39:53.188841 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:53.189039 master-0 kubenswrapper[6980]: I0313 12:39:53.188907 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0feecf04-574d-4bf6-968d-77dd5c35260b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:53.189039 master-0 kubenswrapper[6980]: I0313 12:39:53.189009 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:53.189349 master-0 kubenswrapper[6980]: I0313 12:39:53.189130 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-var-lock\") pod \"installer-1-master-0\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:53.189349 master-0 kubenswrapper[6980]: I0313 12:39:53.189201 6980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-var-lock\") pod \"installer-1-master-0\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:53.208561 master-0 kubenswrapper[6980]: I0313 12:39:53.208499 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0feecf04-574d-4bf6-968d-77dd5c35260b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:53.250462 master-0 kubenswrapper[6980]: W0313 12:39:53.250399 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod642c9e64_2d6f_4f0a_babf_8a54e0002415.slice/crio-9284574796f94098c86fc67adfe78da0327345414266aabaefa42affb7228984 WatchSource:0}: Error finding container 9284574796f94098c86fc67adfe78da0327345414266aabaefa42affb7228984: Status 404 returned error can't find the container with id 9284574796f94098c86fc67adfe78da0327345414266aabaefa42affb7228984 Mar 13 12:39:53.259303 master-0 kubenswrapper[6980]: I0313 12:39:53.259140 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:39:53.480818 master-0 kubenswrapper[6980]: I0313 12:39:53.480539 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 13 12:39:53.499120 master-0 kubenswrapper[6980]: W0313 12:39:53.499047 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod0feecf04_574d_4bf6_968d_77dd5c35260b.slice/crio-9689167e4adfbea953806301dad86365ee4722270dda306dcdfea611bbd4abda WatchSource:0}: Error finding container 9689167e4adfbea953806301dad86365ee4722270dda306dcdfea611bbd4abda: Status 404 returned error can't find the container with id 9689167e4adfbea953806301dad86365ee4722270dda306dcdfea611bbd4abda Mar 13 12:39:53.642815 master-0 kubenswrapper[6980]: I0313 12:39:53.642489 6980 generic.go:334] "Generic (PLEG): container finished" podID="7574e950-de2e-4f90-99d0-eae3b45cd900" containerID="0e678d645097ba94b0c7601c15c6a37574e6aeb92f0646645ec0513c11a7f373" exitCode=0 Mar 13 12:39:53.643044 master-0 kubenswrapper[6980]: I0313 12:39:53.642844 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" event={"ID":"7574e950-de2e-4f90-99d0-eae3b45cd900","Type":"ContainerDied","Data":"0e678d645097ba94b0c7601c15c6a37574e6aeb92f0646645ec0513c11a7f373"} Mar 13 12:39:53.645063 master-0 kubenswrapper[6980]: I0313 12:39:53.645019 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"642c9e64-2d6f-4f0a-babf-8a54e0002415","Type":"ContainerStarted","Data":"6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44"} Mar 13 12:39:53.645135 master-0 kubenswrapper[6980]: I0313 12:39:53.645064 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" 
event={"ID":"642c9e64-2d6f-4f0a-babf-8a54e0002415","Type":"ContainerStarted","Data":"9284574796f94098c86fc67adfe78da0327345414266aabaefa42affb7228984"} Mar 13 12:39:53.646892 master-0 kubenswrapper[6980]: I0313 12:39:53.646854 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"0feecf04-574d-4bf6-968d-77dd5c35260b","Type":"ContainerStarted","Data":"9689167e4adfbea953806301dad86365ee4722270dda306dcdfea611bbd4abda"} Mar 13 12:39:53.686660 master-0 kubenswrapper[6980]: I0313 12:39:53.686487 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=2.686463831 podStartE2EDuration="2.686463831s" podCreationTimestamp="2026-03-13 12:39:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:39:53.683854621 +0000 UTC m=+61.017849247" watchObservedRunningTime="2026-03-13 12:39:53.686463831 +0000 UTC m=+61.020458457" Mar 13 12:39:54.651593 master-0 kubenswrapper[6980]: I0313 12:39:54.651243 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"0feecf04-574d-4bf6-968d-77dd5c35260b","Type":"ContainerStarted","Data":"10be8f9ca4ea6e67dd279190add6bee9a3985f10e4ddcd7b2a1c5c6e9e6e6409"} Mar 13 12:39:54.654048 master-0 kubenswrapper[6980]: I0313 12:39:54.653992 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" event={"ID":"7574e950-de2e-4f90-99d0-eae3b45cd900","Type":"ContainerStarted","Data":"76a96df9566d989c98a8e6ec4db0b723db318e0e0b44be6098e4367114560c36"} Mar 13 12:39:54.654158 master-0 kubenswrapper[6980]: I0313 12:39:54.654061 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" 
event={"ID":"7574e950-de2e-4f90-99d0-eae3b45cd900","Type":"ContainerStarted","Data":"1d7b611917136b370e8f0480b4e2c129c61dc0bbda718db9a9d6eab1f747677d"} Mar 13 12:39:54.668266 master-0 kubenswrapper[6980]: I0313 12:39:54.668197 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=2.6681815479999997 podStartE2EDuration="2.668181548s" podCreationTimestamp="2026-03-13 12:39:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:39:54.667889329 +0000 UTC m=+62.001883955" watchObservedRunningTime="2026-03-13 12:39:54.668181548 +0000 UTC m=+62.002176174" Mar 13 12:39:54.691358 master-0 kubenswrapper[6980]: I0313 12:39:54.691292 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" podStartSLOduration=36.165540629 podStartE2EDuration="39.691271915s" podCreationTimestamp="2026-03-13 12:39:15 +0000 UTC" firstStartedPulling="2026-03-13 12:39:49.787543643 +0000 UTC m=+57.121538269" lastFinishedPulling="2026-03-13 12:39:53.313274919 +0000 UTC m=+60.647269555" observedRunningTime="2026-03-13 12:39:54.691013206 +0000 UTC m=+62.025007852" watchObservedRunningTime="2026-03-13 12:39:54.691271915 +0000 UTC m=+62.025266541" Mar 13 12:39:55.466600 master-0 kubenswrapper[6980]: I0313 12:39:55.466510 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert\") pod \"route-controller-manager-6c5bd84bb8-t9q8b\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b" Mar 13 12:39:55.466990 master-0 kubenswrapper[6980]: E0313 12:39:55.466728 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret 
"serving-cert" not found Mar 13 12:39:55.466990 master-0 kubenswrapper[6980]: E0313 12:39:55.466833 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert podName:96909d88-6a1b-4b24-854d-724c0d3f2ad9 nodeName:}" failed. No retries permitted until 2026-03-13 12:40:03.466812304 +0000 UTC m=+70.800806930 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert") pod "route-controller-manager-6c5bd84bb8-t9q8b" (UID: "96909d88-6a1b-4b24-854d-724c0d3f2ad9") : secret "serving-cert" not found Mar 13 12:39:55.972135 master-0 kubenswrapper[6980]: I0313 12:39:55.972052 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:39:55.972810 master-0 kubenswrapper[6980]: E0313 12:39:55.972238 6980 secret.go:189] Couldn't get secret openshift-oauth-apiserver/serving-cert: secret "serving-cert" not found Mar 13 12:39:55.972810 master-0 kubenswrapper[6980]: E0313 12:39:55.972321 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert podName:0943b2db-9658-4a8d-89da-00779d55db6e nodeName:}" failed. No retries permitted until 2026-03-13 12:40:11.972302141 +0000 UTC m=+79.306296777 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert") pod "apiserver-6f6d949ddd-p9f8k" (UID: "0943b2db-9658-4a8d-89da-00779d55db6e") : secret "serving-cert" not found Mar 13 12:39:56.344356 master-0 kubenswrapper[6980]: I0313 12:39:56.344247 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:39:57.721416 master-0 kubenswrapper[6980]: I0313 12:39:57.721349 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:39:57.721416 master-0 kubenswrapper[6980]: I0313 12:39:57.721416 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: I0313 12:39:57.721817 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: I0313 12:39:57.721850 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: I0313 12:39:57.721885 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: I0313 12:39:57.721943 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: I0313 12:39:57.721972 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: I0313 12:39:57.721993 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: I0313 12:39:57.722028 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: I0313 12:39:57.722174 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: I0313 12:39:57.722200 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: I0313 12:39:57.722224 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: E0313 12:39:57.722233 6980 
secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: E0313 12:39:57.722342 6980 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: E0313 12:39:57.722355 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs podName:59c9773d-7e88-4e30-9b8a-792a869a860e nodeName:}" failed. No retries permitted until 2026-03-13 12:41:01.722325235 +0000 UTC m=+129.056319861 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs") pod "network-metrics-daemon-ztpxf" (UID: "59c9773d-7e88-4e30-9b8a-792a869a860e") : secret "metrics-daemon-secret" not found Mar 13 12:39:57.722449 master-0 kubenswrapper[6980]: E0313 12:39:57.722388 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics podName:6e4e773c-d970-4f5e-9172-c1ebdb41888d nodeName:}" failed. No retries permitted until 2026-03-13 12:41:01.722374627 +0000 UTC m=+129.056369253 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-7wnld" (UID: "6e4e773c-d970-4f5e-9172-c1ebdb41888d") : secret "marketplace-operator-metrics" not found Mar 13 12:39:57.723595 master-0 kubenswrapper[6980]: I0313 12:39:57.722252 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:39:57.723595 master-0 kubenswrapper[6980]: E0313 12:39:57.723391 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 12:39:57.723595 master-0 kubenswrapper[6980]: E0313 12:39:57.723500 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert podName:2a5976df-0366-47b3-bc54-1ba7c249e87c nodeName:}" failed. No retries permitted until 2026-03-13 12:41:01.7234747 +0000 UTC m=+129.057469326 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert") pod "olm-operator-d64cfc9db-d8z4h" (UID: "2a5976df-0366-47b3-bc54-1ba7c249e87c") : secret "olm-operator-serving-cert" not found Mar 13 12:39:57.723760 master-0 kubenswrapper[6980]: E0313 12:39:57.723739 6980 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 12:39:57.723816 master-0 kubenswrapper[6980]: E0313 12:39:57.723782 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls podName:71b741d4-3899-4d31-afd1-72f5a9321f75 nodeName:}" failed. No retries permitted until 2026-03-13 12:41:01.723770559 +0000 UTC m=+129.057765265 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-4jlnk" (UID: "71b741d4-3899-4d31-afd1-72f5a9321f75") : secret "cluster-monitoring-operator-tls" not found Mar 13 12:39:57.724074 master-0 kubenswrapper[6980]: E0313 12:39:57.724050 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 12:39:57.724277 master-0 kubenswrapper[6980]: E0313 12:39:57.724257 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert podName:20217cff-2f81-4a56-9c15-28385c19258c nodeName:}" failed. No retries permitted until 2026-03-13 12:41:01.724187002 +0000 UTC m=+129.058181708 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-w8b7h" (UID: "20217cff-2f81-4a56-9c15-28385c19258c") : secret "package-server-manager-serving-cert" not found Mar 13 12:39:57.724371 master-0 kubenswrapper[6980]: E0313 12:39:57.724306 6980 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 12:39:57.724483 master-0 kubenswrapper[6980]: E0313 12:39:57.724470 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs podName:4f942fce-07a9-4377-8330-c6249a5a8b24 nodeName:}" failed. No retries permitted until 2026-03-13 12:41:01.72445534 +0000 UTC m=+129.058450046 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs") pod "multus-admission-controller-8d675b596-pbgd4" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24") : secret "multus-admission-controller-secret" not found Mar 13 12:39:57.724610 master-0 kubenswrapper[6980]: E0313 12:39:57.724395 6980 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 12:39:57.724725 master-0 kubenswrapper[6980]: E0313 12:39:57.724711 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert podName:8226ffac-1f76-4eaa-ada5-056b5fd031b4 nodeName:}" failed. No retries permitted until 2026-03-13 12:41:01.724698858 +0000 UTC m=+129.058693574 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert") pod "catalog-operator-7d9c49f57b-zxzfr" (UID: "8226ffac-1f76-4eaa-ada5-056b5fd031b4") : secret "catalog-operator-serving-cert" not found Mar 13 12:39:57.731603 master-0 kubenswrapper[6980]: I0313 12:39:57.731196 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:39:57.731603 master-0 kubenswrapper[6980]: I0313 12:39:57.731278 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:39:57.731857 master-0 kubenswrapper[6980]: I0313 12:39:57.731617 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:39:57.731857 master-0 kubenswrapper[6980]: I0313 12:39:57.731619 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " 
pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:39:57.734245 master-0 kubenswrapper[6980]: I0313 12:39:57.732523 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"cluster-version-operator-745944c6b7-7rfrg\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:39:57.736392 master-0 kubenswrapper[6980]: I0313 12:39:57.736341 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:39:58.004363 master-0 kubenswrapper[6980]: I0313 12:39:58.003795 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:39:58.005419 master-0 kubenswrapper[6980]: I0313 12:39:58.005398 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:39:58.010435 master-0 kubenswrapper[6980]: I0313 12:39:58.010386 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:39:58.017941 master-0 kubenswrapper[6980]: I0313 12:39:58.017887 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:39:58.023520 master-0 kubenswrapper[6980]: I0313 12:39:58.023427 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:39:58.270273 master-0 kubenswrapper[6980]: I0313 12:39:58.270036 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-w7mv2"] Mar 13 12:39:58.276919 master-0 kubenswrapper[6980]: I0313 12:39:58.276857 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"] Mar 13 12:39:58.297390 master-0 kubenswrapper[6980]: I0313 12:39:58.295508 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"] Mar 13 12:39:58.312105 master-0 kubenswrapper[6980]: I0313 12:39:58.312020 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"] Mar 13 12:39:58.317551 master-0 kubenswrapper[6980]: W0313 12:39:58.314462 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5834a7c4_4e76_4fc7_a3ba_3ff99ee8f346.slice/crio-fcf34b9143a79db85809e953d50ec9054167443cbeec784e34d10ce0fb366cff WatchSource:0}: Error finding container fcf34b9143a79db85809e953d50ec9054167443cbeec784e34d10ce0fb366cff: Status 404 returned error can't find the container with id fcf34b9143a79db85809e953d50ec9054167443cbeec784e34d10ce0fb366cff Mar 13 12:39:58.320435 master-0 kubenswrapper[6980]: W0313 12:39:58.319300 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod16c2d774_967f_4964_ab4e_eb13c4364f63.slice/crio-1addab03e0a43377bc42e7aa1ca7b3740d5d3b320dad8b09d9eff4da120413e0 WatchSource:0}: Error finding container 1addab03e0a43377bc42e7aa1ca7b3740d5d3b320dad8b09d9eff4da120413e0: Status 404 returned error can't find the container with id 
1addab03e0a43377bc42e7aa1ca7b3740d5d3b320dad8b09d9eff4da120413e0 Mar 13 12:39:58.673799 master-0 kubenswrapper[6980]: I0313 12:39:58.673756 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" event={"ID":"16c2d774-967f-4964-ab4e-eb13c4364f63","Type":"ContainerStarted","Data":"1addab03e0a43377bc42e7aa1ca7b3740d5d3b320dad8b09d9eff4da120413e0"} Mar 13 12:39:58.674829 master-0 kubenswrapper[6980]: I0313 12:39:58.674773 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" event={"ID":"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346","Type":"ContainerStarted","Data":"fcf34b9143a79db85809e953d50ec9054167443cbeec784e34d10ce0fb366cff"} Mar 13 12:39:58.675470 master-0 kubenswrapper[6980]: I0313 12:39:58.675435 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" event={"ID":"e3eb38e0-d8b5-46fc-809d-73791d569816","Type":"ContainerStarted","Data":"725739dce256aec84d1d35f08a2c0ef0a4d6fb2169686aeff14675d6012d989b"} Mar 13 12:39:58.676087 master-0 kubenswrapper[6980]: I0313 12:39:58.676050 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" event={"ID":"f85ab8ab-f9f1-47ad-9c96-9498cef92474","Type":"ContainerStarted","Data":"a7ea7f8a7c14a4770bc974d998f5bd5daace368d7b2428f8320ae10321a074ac"} Mar 13 12:39:58.676732 master-0 kubenswrapper[6980]: I0313 12:39:58.676697 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" event={"ID":"c1213b50-28bf-43ff-94c4-20616907735b","Type":"ContainerStarted","Data":"4801a1906a7001eae337b963c9facf81446c4cb5eb428077e46f31714758e82d"} Mar 13 12:39:59.565912 master-0 kubenswrapper[6980]: I0313 12:39:59.565525 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:39:59.565912 master-0 kubenswrapper[6980]: I0313 12:39:59.565925 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:39:59.575774 master-0 kubenswrapper[6980]: I0313 12:39:59.575655 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:39:59.694085 master-0 kubenswrapper[6980]: I0313 12:39:59.694044 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:40:00.048108 master-0 kubenswrapper[6980]: I0313 12:40:00.047676 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"] Mar 13 12:40:00.048108 master-0 kubenswrapper[6980]: E0313 12:40:00.048050 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b" podUID="96909d88-6a1b-4b24-854d-724c0d3f2ad9" Mar 13 12:40:00.137607 master-0 kubenswrapper[6980]: I0313 12:40:00.135213 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:40:00.699711 master-0 kubenswrapper[6980]: I0313 12:40:00.695210 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b" Mar 13 12:40:00.713736 master-0 kubenswrapper[6980]: I0313 12:40:00.713408 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b" Mar 13 12:40:00.778051 master-0 kubenswrapper[6980]: I0313 12:40:00.777975 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-config\") pod \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " Mar 13 12:40:00.778373 master-0 kubenswrapper[6980]: I0313 12:40:00.778073 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-client-ca\") pod \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " Mar 13 12:40:00.778373 master-0 kubenswrapper[6980]: I0313 12:40:00.778108 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9v7c\" (UniqueName: \"kubernetes.io/projected/96909d88-6a1b-4b24-854d-724c0d3f2ad9-kube-api-access-r9v7c\") pod \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\" (UID: \"96909d88-6a1b-4b24-854d-724c0d3f2ad9\") " Mar 13 12:40:00.778800 master-0 kubenswrapper[6980]: I0313 12:40:00.778754 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-client-ca" (OuterVolumeSpecName: "client-ca") pod "96909d88-6a1b-4b24-854d-724c0d3f2ad9" (UID: "96909d88-6a1b-4b24-854d-724c0d3f2ad9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:40:00.778871 master-0 kubenswrapper[6980]: I0313 12:40:00.778781 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-config" (OuterVolumeSpecName: "config") pod "96909d88-6a1b-4b24-854d-724c0d3f2ad9" (UID: "96909d88-6a1b-4b24-854d-724c0d3f2ad9"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:40:00.783498 master-0 kubenswrapper[6980]: I0313 12:40:00.783417 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96909d88-6a1b-4b24-854d-724c0d3f2ad9-kube-api-access-r9v7c" (OuterVolumeSpecName: "kube-api-access-r9v7c") pod "96909d88-6a1b-4b24-854d-724c0d3f2ad9" (UID: "96909d88-6a1b-4b24-854d-724c0d3f2ad9"). InnerVolumeSpecName "kube-api-access-r9v7c". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:40:00.879255 master-0 kubenswrapper[6980]: I0313 12:40:00.879223 6980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:00.879255 master-0 kubenswrapper[6980]: I0313 12:40:00.879253 6980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/96909d88-6a1b-4b24-854d-724c0d3f2ad9-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:00.879418 master-0 kubenswrapper[6980]: I0313 12:40:00.879265 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9v7c\" (UniqueName: \"kubernetes.io/projected/96909d88-6a1b-4b24-854d-724c0d3f2ad9-kube-api-access-r9v7c\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:01.706481 master-0 kubenswrapper[6980]: I0313 12:40:01.704664 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b" Mar 13 12:40:01.739351 master-0 kubenswrapper[6980]: I0313 12:40:01.739222 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48"] Mar 13 12:40:01.740181 master-0 kubenswrapper[6980]: I0313 12:40:01.740154 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.742647 master-0 kubenswrapper[6980]: I0313 12:40:01.742347 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"] Mar 13 12:40:01.743198 master-0 kubenswrapper[6980]: I0313 12:40:01.743156 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 12:40:01.743387 master-0 kubenswrapper[6980]: I0313 12:40:01.743347 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 12:40:01.743515 master-0 kubenswrapper[6980]: I0313 12:40:01.743490 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 12:40:01.744740 master-0 kubenswrapper[6980]: I0313 12:40:01.744712 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 12:40:01.744845 master-0 kubenswrapper[6980]: I0313 12:40:01.744748 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 12:40:01.751495 master-0 kubenswrapper[6980]: I0313 12:40:01.751441 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6c5bd84bb8-t9q8b"] Mar 13 12:40:01.752810 master-0 kubenswrapper[6980]: I0313 12:40:01.752766 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48"] Mar 13 12:40:01.790343 master-0 kubenswrapper[6980]: I0313 12:40:01.790071 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zzhjr\" (UniqueName: 
\"kubernetes.io/projected/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-kube-api-access-zzhjr\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.790343 master-0 kubenswrapper[6980]: I0313 12:40:01.790352 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.790668 master-0 kubenswrapper[6980]: I0313 12:40:01.790415 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-client-ca\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.790668 master-0 kubenswrapper[6980]: I0313 12:40:01.790444 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-config\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.790668 master-0 kubenswrapper[6980]: I0313 12:40:01.790621 6980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96909d88-6a1b-4b24-854d-724c0d3f2ad9-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:01.891838 master-0 kubenswrapper[6980]: I0313 
12:40:01.891757 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zzhjr\" (UniqueName: \"kubernetes.io/projected/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-kube-api-access-zzhjr\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.892075 master-0 kubenswrapper[6980]: I0313 12:40:01.891990 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.892135 master-0 kubenswrapper[6980]: I0313 12:40:01.892107 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-client-ca\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.892169 master-0 kubenswrapper[6980]: I0313 12:40:01.892152 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-config\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.893369 master-0 kubenswrapper[6980]: E0313 12:40:01.893323 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:40:01.893423 master-0 kubenswrapper[6980]: E0313 
12:40:01.893403 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert podName:4fef4d5a-4282-42ac-a21a-a66e8f5717e7 nodeName:}" failed. No retries permitted until 2026-03-13 12:40:02.393386181 +0000 UTC m=+69.727380807 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert") pod "route-controller-manager-7d8f679f7b-jvx48" (UID: "4fef4d5a-4282-42ac-a21a-a66e8f5717e7") : secret "serving-cert" not found Mar 13 12:40:01.893823 master-0 kubenswrapper[6980]: I0313 12:40:01.893777 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-client-ca\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.894880 master-0 kubenswrapper[6980]: I0313 12:40:01.894733 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-config\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:01.913305 master-0 kubenswrapper[6980]: I0313 12:40:01.913254 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zzhjr\" (UniqueName: \"kubernetes.io/projected/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-kube-api-access-zzhjr\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:02.399422 master-0 kubenswrapper[6980]: I0313 12:40:02.399315 
6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:02.399779 master-0 kubenswrapper[6980]: E0313 12:40:02.399749 6980 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 12:40:02.399929 master-0 kubenswrapper[6980]: E0313 12:40:02.399906 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert podName:4fef4d5a-4282-42ac-a21a-a66e8f5717e7 nodeName:}" failed. No retries permitted until 2026-03-13 12:40:03.399804186 +0000 UTC m=+70.733798812 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert") pod "route-controller-manager-7d8f679f7b-jvx48" (UID: "4fef4d5a-4282-42ac-a21a-a66e8f5717e7") : secret "serving-cert" not found Mar 13 12:40:02.865394 master-0 kubenswrapper[6980]: I0313 12:40:02.865325 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96909d88-6a1b-4b24-854d-724c0d3f2ad9" path="/var/lib/kubelet/pods/96909d88-6a1b-4b24-854d-724c0d3f2ad9/volumes" Mar 13 12:40:03.444147 master-0 kubenswrapper[6980]: I0313 12:40:03.444051 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:03.448630 master-0 kubenswrapper[6980]: I0313 
12:40:03.448561 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert\") pod \"route-controller-manager-7d8f679f7b-jvx48\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:03.571861 master-0 kubenswrapper[6980]: I0313 12:40:03.571740 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:05.727389 master-0 kubenswrapper[6980]: I0313 12:40:05.723932 6980 generic.go:334] "Generic (PLEG): container finished" podID="73dc5747-2d30-4a2d-a784-1dea1e10811d" containerID="d691dfff8d938f7ef898022014143d56dbbe1b4283d8d74c7b7938096f18aafe" exitCode=0 Mar 13 12:40:05.728410 master-0 kubenswrapper[6980]: I0313 12:40:05.724268 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" event={"ID":"73dc5747-2d30-4a2d-a784-1dea1e10811d","Type":"ContainerDied","Data":"d691dfff8d938f7ef898022014143d56dbbe1b4283d8d74c7b7938096f18aafe"} Mar 13 12:40:05.728410 master-0 kubenswrapper[6980]: I0313 12:40:05.727927 6980 scope.go:117] "RemoveContainer" containerID="d691dfff8d938f7ef898022014143d56dbbe1b4283d8d74c7b7938096f18aafe" Mar 13 12:40:05.773633 master-0 kubenswrapper[6980]: I0313 12:40:05.773446 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48"] Mar 13 12:40:05.800848 master-0 kubenswrapper[6980]: W0313 12:40:05.800732 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4fef4d5a_4282_42ac_a21a_a66e8f5717e7.slice/crio-baac284766b0d67a717c4e55c337dac5f49da23fbaa1606cc5fb9046dc8f6064 WatchSource:0}: Error finding 
container baac284766b0d67a717c4e55c337dac5f49da23fbaa1606cc5fb9046dc8f6064: Status 404 returned error can't find the container with id baac284766b0d67a717c4e55c337dac5f49da23fbaa1606cc5fb9046dc8f6064 Mar 13 12:40:06.118400 master-0 kubenswrapper[6980]: I0313 12:40:06.117998 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-d7h2t"] Mar 13 12:40:06.120374 master-0 kubenswrapper[6980]: I0313 12:40:06.120180 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214164 master-0 kubenswrapper[6980]: I0313 12:40:06.214091 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-systemd\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214164 master-0 kubenswrapper[6980]: I0313 12:40:06.214176 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64w7v\" (UniqueName: \"kubernetes.io/projected/58581675-62f2-4564-9e12-bf34551b96ac-kube-api-access-64w7v\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: I0313 12:40:06.214213 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-kubernetes\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: I0313 12:40:06.214230 6980 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysconfig\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: I0313 12:40:06.214251 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-var-lib-kubelet\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: I0313 12:40:06.214296 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: I0313 12:40:06.214322 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-run\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: I0313 12:40:06.214343 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-modprobe-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: 
I0313 12:40:06.214367 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-etc-tuned\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: I0313 12:40:06.214397 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-tmp\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: I0313 12:40:06.214420 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-host\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: I0313 12:40:06.214449 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-sys\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: I0313 12:40:06.214467 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-lib-modules\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.214567 master-0 kubenswrapper[6980]: 
I0313 12:40:06.214493 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-conf\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.315706 master-0 kubenswrapper[6980]: I0313 12:40:06.315486 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-systemd\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.315706 master-0 kubenswrapper[6980]: I0313 12:40:06.315547 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64w7v\" (UniqueName: \"kubernetes.io/projected/58581675-62f2-4564-9e12-bf34551b96ac-kube-api-access-64w7v\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.315706 master-0 kubenswrapper[6980]: I0313 12:40:06.315613 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-kubernetes\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.315706 master-0 kubenswrapper[6980]: I0313 12:40:06.315635 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysconfig\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.315706 master-0 
kubenswrapper[6980]: I0313 12:40:06.315657 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-var-lib-kubelet\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316111 master-0 kubenswrapper[6980]: I0313 12:40:06.315904 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316111 master-0 kubenswrapper[6980]: I0313 12:40:06.316031 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316262 master-0 kubenswrapper[6980]: I0313 12:40:06.316225 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 12:40:06.316262 master-0 kubenswrapper[6980]: I0313 12:40:06.316246 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysconfig\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316404 master-0 kubenswrapper[6980]: I0313 12:40:06.316266 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-systemd\") pod \"tuned-d7h2t\" (UID: 
\"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316404 master-0 kubenswrapper[6980]: I0313 12:40:06.316279 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-run\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316404 master-0 kubenswrapper[6980]: I0313 12:40:06.316326 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-modprobe-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316404 master-0 kubenswrapper[6980]: I0313 12:40:06.316372 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-etc-tuned\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316404 master-0 kubenswrapper[6980]: I0313 12:40:06.316361 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-var-lib-kubelet\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316649 master-0 kubenswrapper[6980]: I0313 12:40:06.316434 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-run\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " 
pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316649 master-0 kubenswrapper[6980]: I0313 12:40:06.316481 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-modprobe-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316649 master-0 kubenswrapper[6980]: I0313 12:40:06.316486 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-tmp\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316649 master-0 kubenswrapper[6980]: I0313 12:40:06.316548 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="f52d50d6-44fd-47d2-bca6-77be37c69694" containerName="installer" containerID="cri-o://a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083" gracePeriod=30 Mar 13 12:40:06.316649 master-0 kubenswrapper[6980]: I0313 12:40:06.316640 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-host\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316894 master-0 kubenswrapper[6980]: I0313 12:40:06.316696 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-host\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316894 master-0 kubenswrapper[6980]: I0313 
12:40:06.316414 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-kubernetes\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316894 master-0 kubenswrapper[6980]: I0313 12:40:06.316751 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-sys\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.316894 master-0 kubenswrapper[6980]: I0313 12:40:06.316783 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-lib-modules\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.317066 master-0 kubenswrapper[6980]: I0313 12:40:06.316911 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-sys\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.317066 master-0 kubenswrapper[6980]: I0313 12:40:06.316952 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-conf\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.317162 master-0 kubenswrapper[6980]: I0313 12:40:06.317102 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-lib-modules\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.317162 master-0 kubenswrapper[6980]: I0313 12:40:06.317151 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-conf\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.323296 master-0 kubenswrapper[6980]: I0313 12:40:06.323102 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-etc-tuned\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.329216 master-0 kubenswrapper[6980]: I0313 12:40:06.323959 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-tmp\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.340570 master-0 kubenswrapper[6980]: I0313 12:40:06.339621 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64w7v\" (UniqueName: \"kubernetes.io/projected/58581675-62f2-4564-9e12-bf34551b96ac-kube-api-access-64w7v\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.536306 master-0 kubenswrapper[6980]: I0313 12:40:06.536186 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:40:06.562477 master-0 kubenswrapper[6980]: W0313 12:40:06.562080 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58581675_62f2_4564_9e12_bf34551b96ac.slice/crio-e38c320b040c0d8bce30dc1750c2d5c1af17aa2da97b586a20a8c637fcec9f0d WatchSource:0}: Error finding container e38c320b040c0d8bce30dc1750c2d5c1af17aa2da97b586a20a8c637fcec9f0d: Status 404 returned error can't find the container with id e38c320b040c0d8bce30dc1750c2d5c1af17aa2da97b586a20a8c637fcec9f0d Mar 13 12:40:06.744422 master-0 kubenswrapper[6980]: I0313 12:40:06.744346 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" event={"ID":"f85ab8ab-f9f1-47ad-9c96-9498cef92474","Type":"ContainerStarted","Data":"fc25e56d7fea43366639c3f1a0e91b8227c2ef3d5cc09c81efa1737a5754d594"} Mar 13 12:40:06.744422 master-0 kubenswrapper[6980]: I0313 12:40:06.744408 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" event={"ID":"f85ab8ab-f9f1-47ad-9c96-9498cef92474","Type":"ContainerStarted","Data":"bd80a4eb998204ba7e70f2b010ad8011c87c0bedcdabb9fb136c6e20838ce05b"} Mar 13 12:40:06.760440 master-0 kubenswrapper[6980]: I0313 12:40:06.757308 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" event={"ID":"73dc5747-2d30-4a2d-a784-1dea1e10811d","Type":"ContainerStarted","Data":"f1548edda6fc1651ae68b99d0898df5822866731cd8d5864b19d50d8643d5b08"} Mar 13 12:40:06.764981 master-0 kubenswrapper[6980]: I0313 12:40:06.764774 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" 
event={"ID":"16c2d774-967f-4964-ab4e-eb13c4364f63","Type":"ContainerStarted","Data":"03adaefddde685072ec465ec3fa62e611b8564796fc923070952faebdeec68f6"} Mar 13 12:40:06.774108 master-0 kubenswrapper[6980]: I0313 12:40:06.773956 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" event={"ID":"c1213b50-28bf-43ff-94c4-20616907735b","Type":"ContainerStarted","Data":"96b0c09be3127683aaf0a493f573f40941ec6f014275802788f91a4405651b7c"} Mar 13 12:40:06.774108 master-0 kubenswrapper[6980]: I0313 12:40:06.774026 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" event={"ID":"c1213b50-28bf-43ff-94c4-20616907735b","Type":"ContainerStarted","Data":"5568c74bf78103146825d0653ed59a230ea4678a37b99c81a8ff3d46062174bd"} Mar 13 12:40:06.782054 master-0 kubenswrapper[6980]: I0313 12:40:06.781977 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" event={"ID":"4fef4d5a-4282-42ac-a21a-a66e8f5717e7","Type":"ContainerStarted","Data":"baac284766b0d67a717c4e55c337dac5f49da23fbaa1606cc5fb9046dc8f6064"} Mar 13 12:40:06.785643 master-0 kubenswrapper[6980]: I0313 12:40:06.783766 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" event={"ID":"e3eb38e0-d8b5-46fc-809d-73791d569816","Type":"ContainerStarted","Data":"4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58"} Mar 13 12:40:06.791303 master-0 kubenswrapper[6980]: I0313 12:40:06.789978 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-qh2tf"] Mar 13 12:40:06.791303 master-0 kubenswrapper[6980]: I0313 12:40:06.790785 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:06.795627 master-0 kubenswrapper[6980]: I0313 12:40:06.793853 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 12:40:06.795627 master-0 kubenswrapper[6980]: I0313 12:40:06.793860 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 12:40:06.795627 master-0 kubenswrapper[6980]: I0313 12:40:06.794134 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 12:40:06.795627 master-0 kubenswrapper[6980]: I0313 12:40:06.794176 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 12:40:06.799618 master-0 kubenswrapper[6980]: I0313 12:40:06.796487 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" event={"ID":"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346","Type":"ContainerStarted","Data":"bac301547b48cdecb8c65de938d2eda1a0511b2e5a444761ea88edbc804c54a7"} Mar 13 12:40:06.799896 master-0 kubenswrapper[6980]: I0313 12:40:06.799799 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" event={"ID":"58581675-62f2-4564-9e12-bf34551b96ac","Type":"ContainerStarted","Data":"e38c320b040c0d8bce30dc1750c2d5c1af17aa2da97b586a20a8c637fcec9f0d"} Mar 13 12:40:06.814648 master-0 kubenswrapper[6980]: I0313 12:40:06.813393 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qh2tf"] Mar 13 12:40:06.960096 master-0 kubenswrapper[6980]: I0313 12:40:06.960006 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-metrics-tls\") pod \"dns-default-qh2tf\" (UID: 
\"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:06.960096 master-0 kubenswrapper[6980]: I0313 12:40:06.960048 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtpqk\" (UniqueName: \"kubernetes.io/projected/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-kube-api-access-qtpqk\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:06.960373 master-0 kubenswrapper[6980]: I0313 12:40:06.960147 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-config-volume\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:06.960373 master-0 kubenswrapper[6980]: I0313 12:40:06.960324 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" podStartSLOduration=0.960288332 podStartE2EDuration="960.288332ms" podCreationTimestamp="2026-03-13 12:40:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:40:06.946327804 +0000 UTC m=+74.280322430" watchObservedRunningTime="2026-03-13 12:40:06.960288332 +0000 UTC m=+74.294282958" Mar 13 12:40:07.065611 master-0 kubenswrapper[6980]: I0313 12:40:07.061348 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-metrics-tls\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:07.065611 master-0 kubenswrapper[6980]: I0313 12:40:07.061420 6980 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-qtpqk\" (UniqueName: \"kubernetes.io/projected/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-kube-api-access-qtpqk\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:07.065611 master-0 kubenswrapper[6980]: I0313 12:40:07.061481 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-config-volume\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:07.065611 master-0 kubenswrapper[6980]: I0313 12:40:07.062658 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-config-volume\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:07.083607 master-0 kubenswrapper[6980]: I0313 12:40:07.072902 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-metrics-tls\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:07.114611 master-0 kubenswrapper[6980]: I0313 12:40:07.107224 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtpqk\" (UniqueName: \"kubernetes.io/projected/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-kube-api-access-qtpqk\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:07.114611 master-0 kubenswrapper[6980]: I0313 12:40:07.108181 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:07.327646 master-0 kubenswrapper[6980]: I0313 12:40:07.327089 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-5jth9"] Mar 13 12:40:07.328540 master-0 kubenswrapper[6980]: I0313 12:40:07.328500 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-5jth9" Mar 13 12:40:07.526620 master-0 kubenswrapper[6980]: I0313 12:40:07.526471 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f726d662-90e1-45b9-9bba-76a9c03faced-hosts-file\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9" Mar 13 12:40:07.532661 master-0 kubenswrapper[6980]: I0313 12:40:07.532537 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hflng\" (UniqueName: \"kubernetes.io/projected/f726d662-90e1-45b9-9bba-76a9c03faced-kube-api-access-hflng\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9" Mar 13 12:40:07.563877 master-0 kubenswrapper[6980]: I0313 12:40:07.563437 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-qh2tf"] Mar 13 12:40:07.637623 master-0 kubenswrapper[6980]: I0313 12:40:07.634955 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f726d662-90e1-45b9-9bba-76a9c03faced-hosts-file\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9" Mar 13 12:40:07.637623 master-0 kubenswrapper[6980]: I0313 12:40:07.635030 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflng\" (UniqueName: 
\"kubernetes.io/projected/f726d662-90e1-45b9-9bba-76a9c03faced-kube-api-access-hflng\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9" Mar 13 12:40:07.637623 master-0 kubenswrapper[6980]: I0313 12:40:07.635626 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f726d662-90e1-45b9-9bba-76a9c03faced-hosts-file\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9" Mar 13 12:40:07.673170 master-0 kubenswrapper[6980]: I0313 12:40:07.672867 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflng\" (UniqueName: \"kubernetes.io/projected/f726d662-90e1-45b9-9bba-76a9c03faced-kube-api-access-hflng\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9" Mar 13 12:40:07.806280 master-0 kubenswrapper[6980]: I0313 12:40:07.806195 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qh2tf" event={"ID":"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5","Type":"ContainerStarted","Data":"d37419779a5a99d07a8431d2c7b74e48bacfbaba667a5ee5762a54d36c0f1cf1"} Mar 13 12:40:07.807744 master-0 kubenswrapper[6980]: I0313 12:40:07.807697 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" event={"ID":"58581675-62f2-4564-9e12-bf34551b96ac","Type":"ContainerStarted","Data":"2901c214d41a3d37561d06244c54e495338268627796dd5fccd3210e09776ebf"} Mar 13 12:40:07.956629 master-0 kubenswrapper[6980]: I0313 12:40:07.956558 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-5jth9" Mar 13 12:40:07.979183 master-0 kubenswrapper[6980]: W0313 12:40:07.978800 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf726d662_90e1_45b9_9bba_76a9c03faced.slice/crio-66a62527c0e5db66e9872c3dd7560bdbc6ef268bc8ac034206fe2aa11b418af3 WatchSource:0}: Error finding container 66a62527c0e5db66e9872c3dd7560bdbc6ef268bc8ac034206fe2aa11b418af3: Status 404 returned error can't find the container with id 66a62527c0e5db66e9872c3dd7560bdbc6ef268bc8ac034206fe2aa11b418af3 Mar 13 12:40:08.820302 master-0 kubenswrapper[6980]: I0313 12:40:08.820240 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5jth9" event={"ID":"f726d662-90e1-45b9-9bba-76a9c03faced","Type":"ContainerStarted","Data":"767df9f5b2b1b4f4e99bd4f8838227170f8339114b0b6a190665df1ef81a33dc"} Mar 13 12:40:08.821160 master-0 kubenswrapper[6980]: I0313 12:40:08.820324 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-5jth9" event={"ID":"f726d662-90e1-45b9-9bba-76a9c03faced","Type":"ContainerStarted","Data":"66a62527c0e5db66e9872c3dd7560bdbc6ef268bc8ac034206fe2aa11b418af3"} Mar 13 12:40:08.838961 master-0 kubenswrapper[6980]: I0313 12:40:08.838830 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-5jth9" podStartSLOduration=1.838785162 podStartE2EDuration="1.838785162s" podCreationTimestamp="2026-03-13 12:40:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:40:08.835275834 +0000 UTC m=+76.169270460" watchObservedRunningTime="2026-03-13 12:40:08.838785162 +0000 UTC m=+76.172779788" Mar 13 12:40:08.919630 master-0 kubenswrapper[6980]: I0313 12:40:08.919232 6980 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 12:40:08.921917 master-0 kubenswrapper[6980]: I0313 12:40:08.921816 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.063180 master-0 kubenswrapper[6980]: I0313 12:40:09.058006 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 12:40:09.160331 master-0 kubenswrapper[6980]: I0313 12:40:09.160291 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.160460 master-0 kubenswrapper[6980]: I0313 12:40:09.160341 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-var-lock\") pod \"installer-3-master-0\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.160460 master-0 kubenswrapper[6980]: I0313 12:40:09.160367 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85ff9007-425d-4004-9a9d-25ac2b47761c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.262791 master-0 kubenswrapper[6980]: I0313 12:40:09.262706 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-kubelet-dir\") pod \"installer-3-master-0\" (UID: 
\"85ff9007-425d-4004-9a9d-25ac2b47761c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.262791 master-0 kubenswrapper[6980]: I0313 12:40:09.262796 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-var-lock\") pod \"installer-3-master-0\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.263208 master-0 kubenswrapper[6980]: I0313 12:40:09.262899 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.263208 master-0 kubenswrapper[6980]: I0313 12:40:09.262995 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85ff9007-425d-4004-9a9d-25ac2b47761c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.263208 master-0 kubenswrapper[6980]: I0313 12:40:09.263147 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-var-lock\") pod \"installer-3-master-0\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.292515 master-0 kubenswrapper[6980]: I0313 12:40:09.292445 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85ff9007-425d-4004-9a9d-25ac2b47761c-kube-api-access\") pod \"installer-3-master-0\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") " 
pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.384442 master-0 kubenswrapper[6980]: I0313 12:40:09.384397 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 12:40:09.831420 master-0 kubenswrapper[6980]: I0313 12:40:09.831369 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" event={"ID":"4fef4d5a-4282-42ac-a21a-a66e8f5717e7","Type":"ContainerStarted","Data":"587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1"} Mar 13 12:40:09.831966 master-0 kubenswrapper[6980]: I0313 12:40:09.831861 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:09.835200 master-0 kubenswrapper[6980]: I0313 12:40:09.835140 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-b52x8_3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/openshift-controller-manager-operator/0.log" Mar 13 12:40:09.835350 master-0 kubenswrapper[6980]: I0313 12:40:09.835286 6980 generic.go:334] "Generic (PLEG): container finished" podID="3f66dbf5-722f-4aed-becb-fb1b62ea7fe6" containerID="d2c23685e01b04fc93d262aa5b6ebee8c573cd64c0296928ae13eaf96f993a18" exitCode=1 Mar 13 12:40:09.835350 master-0 kubenswrapper[6980]: I0313 12:40:09.835340 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" event={"ID":"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6","Type":"ContainerDied","Data":"d2c23685e01b04fc93d262aa5b6ebee8c573cd64c0296928ae13eaf96f993a18"} Mar 13 12:40:09.836135 master-0 kubenswrapper[6980]: I0313 12:40:09.836096 6980 scope.go:117] "RemoveContainer" containerID="d2c23685e01b04fc93d262aa5b6ebee8c573cd64c0296928ae13eaf96f993a18" 
Mar 13 12:40:09.875385 master-0 kubenswrapper[6980]: I0313 12:40:09.875002 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" podStartSLOduration=6.115255584 podStartE2EDuration="9.874930146s" podCreationTimestamp="2026-03-13 12:40:00 +0000 UTC" firstStartedPulling="2026-03-13 12:40:05.803720529 +0000 UTC m=+73.137715155" lastFinishedPulling="2026-03-13 12:40:09.563395091 +0000 UTC m=+76.897389717" observedRunningTime="2026-03-13 12:40:09.853212959 +0000 UTC m=+77.187207585" watchObservedRunningTime="2026-03-13 12:40:09.874930146 +0000 UTC m=+77.208924772" Mar 13 12:40:09.965911 master-0 kubenswrapper[6980]: I0313 12:40:09.965741 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 12:40:10.135954 master-0 kubenswrapper[6980]: I0313 12:40:10.135893 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:10.650459 master-0 kubenswrapper[6980]: W0313 12:40:10.650152 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod85ff9007_425d_4004_9a9d_25ac2b47761c.slice/crio-152fd23bd3a9a9af5b3bf1f5e4dc3d7d1f53f7b6b0961acf72fa544ca5a951fa WatchSource:0}: Error finding container 152fd23bd3a9a9af5b3bf1f5e4dc3d7d1f53f7b6b0961acf72fa544ca5a951fa: Status 404 returned error can't find the container with id 152fd23bd3a9a9af5b3bf1f5e4dc3d7d1f53f7b6b0961acf72fa544ca5a951fa Mar 13 12:40:10.714887 master-0 kubenswrapper[6980]: I0313 12:40:10.714144 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " 
pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:40:10.714887 master-0 kubenswrapper[6980]: I0313 12:40:10.714296 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert\") pod \"controller-manager-669d874ccc-8rrvh\" (UID: \"425e18c5-3d11-4f04-be33-45fa3f035129\") " pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" Mar 13 12:40:10.714887 master-0 kubenswrapper[6980]: E0313 12:40:10.714521 6980 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: object "openshift-controller-manager"/"client-ca" not registered Mar 13 12:40:10.715208 master-0 kubenswrapper[6980]: E0313 12:40:10.714914 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. No retries permitted until 2026-03-13 12:41:14.714880338 +0000 UTC m=+142.048874974 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : object "openshift-controller-manager"/"client-ca" not registered Mar 13 12:40:10.715282 master-0 kubenswrapper[6980]: E0313 12:40:10.715264 6980 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: object "openshift-controller-manager"/"serving-cert" not registered Mar 13 12:40:10.715335 master-0 kubenswrapper[6980]: E0313 12:40:10.715301 6980 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert podName:425e18c5-3d11-4f04-be33-45fa3f035129 nodeName:}" failed. 
No retries permitted until 2026-03-13 12:41:14.715292741 +0000 UTC m=+142.049287367 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert") pod "controller-manager-669d874ccc-8rrvh" (UID: "425e18c5-3d11-4f04-be33-45fa3f035129") : object "openshift-controller-manager"/"serving-cert" not registered Mar 13 12:40:10.855915 master-0 kubenswrapper[6980]: I0313 12:40:10.855851 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"85ff9007-425d-4004-9a9d-25ac2b47761c","Type":"ContainerStarted","Data":"152fd23bd3a9a9af5b3bf1f5e4dc3d7d1f53f7b6b0961acf72fa544ca5a951fa"} Mar 13 12:40:10.862275 master-0 kubenswrapper[6980]: I0313 12:40:10.862233 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-b52x8_3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/openshift-controller-manager-operator/0.log" Mar 13 12:40:10.868184 master-0 kubenswrapper[6980]: I0313 12:40:10.868115 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" event={"ID":"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6","Type":"ContainerStarted","Data":"9611f10b22041823517def90fc354bf396ed36c2da787d15f2b67268e42a0e1b"} Mar 13 12:40:11.350272 master-0 kubenswrapper[6980]: I0313 12:40:11.350224 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 12:40:11.350844 master-0 kubenswrapper[6980]: I0313 12:40:11.350780 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="642c9e64-2d6f-4f0a-babf-8a54e0002415" containerName="installer" 
containerID="cri-o://6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44" gracePeriod=30 Mar 13 12:40:11.468670 master-0 kubenswrapper[6980]: I0313 12:40:11.467390 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48"] Mar 13 12:40:11.892380 master-0 kubenswrapper[6980]: I0313 12:40:11.892298 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"85ff9007-425d-4004-9a9d-25ac2b47761c","Type":"ContainerStarted","Data":"f3b04290edf952f0f53372e1b248559907410429206c66430ddadabfa3f41959"} Mar 13 12:40:11.895498 master-0 kubenswrapper[6980]: I0313 12:40:11.895421 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qh2tf" event={"ID":"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5","Type":"ContainerStarted","Data":"5b4d69cf462b38f623e105c421f457d875bc075143f1a53fbfbbbc17bfafbca1"} Mar 13 12:40:11.895498 master-0 kubenswrapper[6980]: I0313 12:40:11.895464 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-qh2tf" event={"ID":"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5","Type":"ContainerStarted","Data":"c69395d21b711c719462fd565c991444dba33be0e2a7712829cf7e15e1a3879e"} Mar 13 12:40:11.914790 master-0 kubenswrapper[6980]: I0313 12:40:11.914137 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=3.9141060579999998 podStartE2EDuration="3.914106058s" podCreationTimestamp="2026-03-13 12:40:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:40:11.910740745 +0000 UTC m=+79.244735371" watchObservedRunningTime="2026-03-13 12:40:11.914106058 +0000 UTC m=+79.248100704" Mar 13 12:40:11.936102 master-0 kubenswrapper[6980]: I0313 12:40:11.936020 6980 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-dns/dns-default-qh2tf" podStartSLOduration=2.820932086 podStartE2EDuration="5.936001469s" podCreationTimestamp="2026-03-13 12:40:06 +0000 UTC" firstStartedPulling="2026-03-13 12:40:07.594872863 +0000 UTC m=+74.928867489" lastFinishedPulling="2026-03-13 12:40:10.709942246 +0000 UTC m=+78.043936872" observedRunningTime="2026-03-13 12:40:11.93408889 +0000 UTC m=+79.268083526" watchObservedRunningTime="2026-03-13 12:40:11.936001469 +0000 UTC m=+79.269996085" Mar 13 12:40:12.034847 master-0 kubenswrapper[6980]: I0313 12:40:12.034764 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:40:12.039590 master-0 kubenswrapper[6980]: I0313 12:40:12.039529 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:40:12.187453 master-0 kubenswrapper[6980]: I0313 12:40:12.187275 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:40:12.631671 master-0 kubenswrapper[6980]: I0313 12:40:12.631495 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"] Mar 13 12:40:12.902718 master-0 kubenswrapper[6980]: I0313 12:40:12.902650 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" event={"ID":"0943b2db-9658-4a8d-89da-00779d55db6e","Type":"ContainerStarted","Data":"b833b4c44fc7671aadf2bbf7695850b67cef941ee23693e9e8acaa00999b3a13"} Mar 13 12:40:12.903954 master-0 kubenswrapper[6980]: I0313 12:40:12.903927 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:12.904107 master-0 kubenswrapper[6980]: I0313 12:40:12.902989 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" podUID="4fef4d5a-4282-42ac-a21a-a66e8f5717e7" containerName="route-controller-manager" containerID="cri-o://587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1" gracePeriod=30 Mar 13 12:40:13.277605 master-0 kubenswrapper[6980]: I0313 12:40:13.277180 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:13.307424 master-0 kubenswrapper[6980]: I0313 12:40:13.307361 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn"] Mar 13 12:40:13.309054 master-0 kubenswrapper[6980]: E0313 12:40:13.308985 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fef4d5a-4282-42ac-a21a-a66e8f5717e7" containerName="route-controller-manager" Mar 13 12:40:13.309054 master-0 kubenswrapper[6980]: I0313 12:40:13.309049 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fef4d5a-4282-42ac-a21a-a66e8f5717e7" containerName="route-controller-manager" Mar 13 12:40:13.314286 master-0 kubenswrapper[6980]: I0313 12:40:13.314236 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fef4d5a-4282-42ac-a21a-a66e8f5717e7" containerName="route-controller-manager" Mar 13 12:40:13.314894 master-0 kubenswrapper[6980]: I0313 12:40:13.314859 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn"] Mar 13 12:40:13.314987 master-0 kubenswrapper[6980]: I0313 12:40:13.314970 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.353476 master-0 kubenswrapper[6980]: I0313 12:40:13.353427 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zzhjr\" (UniqueName: \"kubernetes.io/projected/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-kube-api-access-zzhjr\") pod \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " Mar 13 12:40:13.353701 master-0 kubenswrapper[6980]: I0313 12:40:13.353523 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-client-ca\") pod \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " Mar 13 12:40:13.354626 master-0 kubenswrapper[6980]: I0313 12:40:13.354067 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-config\") pod \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " Mar 13 12:40:13.354689 master-0 kubenswrapper[6980]: I0313 12:40:13.354648 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert\") pod \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\" (UID: \"4fef4d5a-4282-42ac-a21a-a66e8f5717e7\") " Mar 13 12:40:13.355515 master-0 kubenswrapper[6980]: I0313 12:40:13.355074 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-client-ca\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " 
pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.355933 master-0 kubenswrapper[6980]: I0313 12:40:13.355897 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64hl9\" (UniqueName: \"kubernetes.io/projected/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-kube-api-access-64hl9\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.355933 master-0 kubenswrapper[6980]: I0313 12:40:13.355293 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-client-ca" (OuterVolumeSpecName: "client-ca") pod "4fef4d5a-4282-42ac-a21a-a66e8f5717e7" (UID: "4fef4d5a-4282-42ac-a21a-a66e8f5717e7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:40:13.356044 master-0 kubenswrapper[6980]: I0313 12:40:13.355940 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-serving-cert\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.356044 master-0 kubenswrapper[6980]: I0313 12:40:13.355624 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-config" (OuterVolumeSpecName: "config") pod "4fef4d5a-4282-42ac-a21a-a66e8f5717e7" (UID: "4fef4d5a-4282-42ac-a21a-a66e8f5717e7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:40:13.356044 master-0 kubenswrapper[6980]: I0313 12:40:13.356012 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-config\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.356548 master-0 kubenswrapper[6980]: I0313 12:40:13.356499 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-kube-api-access-zzhjr" (OuterVolumeSpecName: "kube-api-access-zzhjr") pod "4fef4d5a-4282-42ac-a21a-a66e8f5717e7" (UID: "4fef4d5a-4282-42ac-a21a-a66e8f5717e7"). InnerVolumeSpecName "kube-api-access-zzhjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:40:13.359011 master-0 kubenswrapper[6980]: I0313 12:40:13.358958 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4fef4d5a-4282-42ac-a21a-a66e8f5717e7" (UID: "4fef4d5a-4282-42ac-a21a-a66e8f5717e7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:40:13.369687 master-0 kubenswrapper[6980]: I0313 12:40:13.369646 6980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:13.470869 master-0 kubenswrapper[6980]: I0313 12:40:13.470772 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-client-ca\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.471119 master-0 kubenswrapper[6980]: I0313 12:40:13.471087 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64hl9\" (UniqueName: \"kubernetes.io/projected/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-kube-api-access-64hl9\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.471534 master-0 kubenswrapper[6980]: I0313 12:40:13.471182 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-serving-cert\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.471534 master-0 kubenswrapper[6980]: I0313 12:40:13.471478 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-config\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: 
\"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.471881 master-0 kubenswrapper[6980]: I0313 12:40:13.471856 6980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:13.471944 master-0 kubenswrapper[6980]: I0313 12:40:13.471885 6980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:13.471944 master-0 kubenswrapper[6980]: I0313 12:40:13.471905 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zzhjr\" (UniqueName: \"kubernetes.io/projected/4fef4d5a-4282-42ac-a21a-a66e8f5717e7-kube-api-access-zzhjr\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:13.472526 master-0 kubenswrapper[6980]: I0313 12:40:13.472488 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-client-ca\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.473003 master-0 kubenswrapper[6980]: I0313 12:40:13.472964 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-config\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.474637 master-0 kubenswrapper[6980]: I0313 12:40:13.474600 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-serving-cert\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.498401 master-0 kubenswrapper[6980]: I0313 12:40:13.498119 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64hl9\" (UniqueName: \"kubernetes.io/projected/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-kube-api-access-64hl9\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.635901 master-0 kubenswrapper[6980]: I0313 12:40:13.635274 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:40:13.911307 master-0 kubenswrapper[6980]: I0313 12:40:13.910636 6980 generic.go:334] "Generic (PLEG): container finished" podID="4fef4d5a-4282-42ac-a21a-a66e8f5717e7" containerID="587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1" exitCode=0 Mar 13 12:40:13.911307 master-0 kubenswrapper[6980]: I0313 12:40:13.910708 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" Mar 13 12:40:13.911307 master-0 kubenswrapper[6980]: I0313 12:40:13.910717 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" event={"ID":"4fef4d5a-4282-42ac-a21a-a66e8f5717e7","Type":"ContainerDied","Data":"587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1"} Mar 13 12:40:13.911307 master-0 kubenswrapper[6980]: I0313 12:40:13.910768 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48" event={"ID":"4fef4d5a-4282-42ac-a21a-a66e8f5717e7","Type":"ContainerDied","Data":"baac284766b0d67a717c4e55c337dac5f49da23fbaa1606cc5fb9046dc8f6064"} Mar 13 12:40:13.911307 master-0 kubenswrapper[6980]: I0313 12:40:13.910823 6980 scope.go:117] "RemoveContainer" containerID="587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1" Mar 13 12:40:13.926482 master-0 kubenswrapper[6980]: I0313 12:40:13.926451 6980 scope.go:117] "RemoveContainer" containerID="587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1" Mar 13 12:40:13.927082 master-0 kubenswrapper[6980]: E0313 12:40:13.927044 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1\": container with ID starting with 587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1 not found: ID does not exist" containerID="587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1" Mar 13 12:40:13.927239 master-0 kubenswrapper[6980]: I0313 12:40:13.927102 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1"} err="failed to get container status 
\"587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1\": rpc error: code = NotFound desc = could not find container \"587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1\": container with ID starting with 587e3b182ceefa80191bb82ff16d402495c5f92460f48d34dd1b07cef819e5e1 not found: ID does not exist"
Mar 13 12:40:13.954792 master-0 kubenswrapper[6980]: I0313 12:40:13.954719 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48"]
Mar 13 12:40:13.958302 master-0 kubenswrapper[6980]: I0313 12:40:13.958247 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7d8f679f7b-jvx48"]
Mar 13 12:40:14.111376 master-0 kubenswrapper[6980]: I0313 12:40:14.111332 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn"]
Mar 13 12:40:14.120689 master-0 kubenswrapper[6980]: W0313 12:40:14.120632 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef1dbe95_a46f_4d09_87b0_f51429f2d82c.slice/crio-1c5ece38636979dc6aaacdac426045ab401d2a85cb39e888cefc074380d03a96 WatchSource:0}: Error finding container 1c5ece38636979dc6aaacdac426045ab401d2a85cb39e888cefc074380d03a96: Status 404 returned error can't find the container with id 1c5ece38636979dc6aaacdac426045ab401d2a85cb39e888cefc074380d03a96
Mar 13 12:40:14.151134 master-0 kubenswrapper[6980]: I0313 12:40:14.151082 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Mar 13 12:40:14.151685 master-0 kubenswrapper[6980]: I0313 12:40:14.151664 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.165473 master-0 kubenswrapper[6980]: I0313 12:40:14.165431 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Mar 13 12:40:14.293044 master-0 kubenswrapper[6980]: I0313 12:40:14.292984 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-var-lock\") pod \"installer-2-master-0\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.293230 master-0 kubenswrapper[6980]: I0313 12:40:14.293117 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aae10aa9-9c7d-4319-9829-e900af7df301-kube-api-access\") pod \"installer-2-master-0\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.293728 master-0 kubenswrapper[6980]: I0313 12:40:14.293683 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.395069 master-0 kubenswrapper[6980]: I0313 12:40:14.395001 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aae10aa9-9c7d-4319-9829-e900af7df301-kube-api-access\") pod \"installer-2-master-0\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.395304 master-0 kubenswrapper[6980]: I0313 12:40:14.395259 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.395304 master-0 kubenswrapper[6980]: I0313 12:40:14.395302 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-var-lock\") pod \"installer-2-master-0\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.395408 master-0 kubenswrapper[6980]: I0313 12:40:14.395395 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-var-lock\") pod \"installer-2-master-0\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.395476 master-0 kubenswrapper[6980]: I0313 12:40:14.395437 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.413719 master-0 kubenswrapper[6980]: I0313 12:40:14.413466 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aae10aa9-9c7d-4319-9829-e900af7df301-kube-api-access\") pod \"installer-2-master-0\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.473769 master-0 kubenswrapper[6980]: I0313 12:40:14.473663 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 12:40:14.868747 master-0 kubenswrapper[6980]: I0313 12:40:14.868562 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fef4d5a-4282-42ac-a21a-a66e8f5717e7" path="/var/lib/kubelet/pods/4fef4d5a-4282-42ac-a21a-a66e8f5717e7/volumes"
Mar 13 12:40:14.918170 master-0 kubenswrapper[6980]: I0313 12:40:14.918045 6980 generic.go:334] "Generic (PLEG): container finished" podID="b2ad4825-17fa-4ddd-b21e-334158f1c048" containerID="a9dd7732800ec2cf2ba2657ee89d490d35d4ed3ca8ea35ffd325cd650a57aa03" exitCode=0
Mar 13 12:40:14.918170 master-0 kubenswrapper[6980]: I0313 12:40:14.918102 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" event={"ID":"b2ad4825-17fa-4ddd-b21e-334158f1c048","Type":"ContainerDied","Data":"a9dd7732800ec2cf2ba2657ee89d490d35d4ed3ca8ea35ffd325cd650a57aa03"}
Mar 13 12:40:14.918759 master-0 kubenswrapper[6980]: I0313 12:40:14.918515 6980 scope.go:117] "RemoveContainer" containerID="a9dd7732800ec2cf2ba2657ee89d490d35d4ed3ca8ea35ffd325cd650a57aa03"
Mar 13 12:40:14.921207 master-0 kubenswrapper[6980]: I0313 12:40:14.921133 6980 generic.go:334] "Generic (PLEG): container finished" podID="6e55908e-59f3-45a2-82aa-2616c5a2fd52" containerID="7cea7ef63e0a2bbd7a51a61ea7823a56840343f0d56d2b827f3841e4907fb6b2" exitCode=0
Mar 13 12:40:14.921465 master-0 kubenswrapper[6980]: I0313 12:40:14.921270 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" event={"ID":"6e55908e-59f3-45a2-82aa-2616c5a2fd52","Type":"ContainerDied","Data":"7cea7ef63e0a2bbd7a51a61ea7823a56840343f0d56d2b827f3841e4907fb6b2"}
Mar 13 12:40:14.922169 master-0 kubenswrapper[6980]: I0313 12:40:14.921879 6980 scope.go:117] "RemoveContainer" containerID="7cea7ef63e0a2bbd7a51a61ea7823a56840343f0d56d2b827f3841e4907fb6b2"
Mar 13 12:40:14.923428 master-0 kubenswrapper[6980]: I0313 12:40:14.923399 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" event={"ID":"ef1dbe95-a46f-4d09-87b0-f51429f2d82c","Type":"ContainerStarted","Data":"278d68915cc7294ac01aa5d48357a22b6b3777b90445159f08c7639fb945a121"}
Mar 13 12:40:14.923513 master-0 kubenswrapper[6980]: I0313 12:40:14.923434 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" event={"ID":"ef1dbe95-a46f-4d09-87b0-f51429f2d82c","Type":"ContainerStarted","Data":"1c5ece38636979dc6aaacdac426045ab401d2a85cb39e888cefc074380d03a96"}
Mar 13 12:40:14.924224 master-0 kubenswrapper[6980]: I0313 12:40:14.924204 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn"
Mar 13 12:40:15.241085 master-0 kubenswrapper[6980]: I0313 12:40:15.241026 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" podStartSLOduration=4.241005422 podStartE2EDuration="4.241005422s" podCreationTimestamp="2026-03-13 12:40:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:40:15.23341695 +0000 UTC m=+82.567411586" watchObservedRunningTime="2026-03-13 12:40:15.241005422 +0000 UTC m=+82.575000048"
Mar 13 12:40:15.285567 master-0 kubenswrapper[6980]: I0313 12:40:15.285517 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn"
Mar 13 12:40:15.803827 master-0 kubenswrapper[6980]: I0313 12:40:15.803749 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"]
Mar 13 12:40:15.814847 master-0 kubenswrapper[6980]: W0313 12:40:15.814792 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podaae10aa9_9c7d_4319_9829_e900af7df301.slice/crio-90279ca564e83f63eaf1b9ddebe2c2557bd9c27dd880ed894a069d9a79f4f270 WatchSource:0}: Error finding container 90279ca564e83f63eaf1b9ddebe2c2557bd9c27dd880ed894a069d9a79f4f270: Status 404 returned error can't find the container with id 90279ca564e83f63eaf1b9ddebe2c2557bd9c27dd880ed894a069d9a79f4f270
Mar 13 12:40:15.938766 master-0 kubenswrapper[6980]: I0313 12:40:15.938704 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"aae10aa9-9c7d-4319-9829-e900af7df301","Type":"ContainerStarted","Data":"90279ca564e83f63eaf1b9ddebe2c2557bd9c27dd880ed894a069d9a79f4f270"}
Mar 13 12:40:15.945636 master-0 kubenswrapper[6980]: I0313 12:40:15.944204 6980 generic.go:334] "Generic (PLEG): container finished" podID="0943b2db-9658-4a8d-89da-00779d55db6e" containerID="882be8390f3c93b88f969b0da9f7aac073082985655733e890261d7e7b41c713" exitCode=0
Mar 13 12:40:15.945636 master-0 kubenswrapper[6980]: I0313 12:40:15.944328 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" event={"ID":"0943b2db-9658-4a8d-89da-00779d55db6e","Type":"ContainerDied","Data":"882be8390f3c93b88f969b0da9f7aac073082985655733e890261d7e7b41c713"}
Mar 13 12:40:15.947378 master-0 kubenswrapper[6980]: I0313 12:40:15.947056 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" event={"ID":"b2ad4825-17fa-4ddd-b21e-334158f1c048","Type":"ContainerStarted","Data":"af5bd031c3ba8e5558a089e89f1586bb1851a5a957c2a432b994c717835c02ab"}
Mar 13 12:40:15.956744 master-0 kubenswrapper[6980]: I0313 12:40:15.952406 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" event={"ID":"6e55908e-59f3-45a2-82aa-2616c5a2fd52","Type":"ContainerStarted","Data":"51effc8609085787f430cca7b9ad72523aa8d64c91de04562b54b7c24c9eac4e"}
Mar 13 12:40:16.394078 master-0 kubenswrapper[6980]: I0313 12:40:16.393668 6980 pod_container_manager_linux.go:210] "Failed to delete cgroup paths" cgroupName=["kubepods","burstable","pod425e18c5-3d11-4f04-be33-45fa3f035129"] err="unable to destroy cgroup paths for cgroup [kubepods burstable pod425e18c5-3d11-4f04-be33-45fa3f035129] : Timed out while waiting for systemd to remove kubepods-burstable-pod425e18c5_3d11_4f04_be33_45fa3f035129.slice"
Mar 13 12:40:16.394078 master-0 kubenswrapper[6980]: E0313 12:40:16.393716 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to delete cgroup paths for [kubepods burstable pod425e18c5-3d11-4f04-be33-45fa3f035129] : unable to destroy cgroup paths for cgroup [kubepods burstable pod425e18c5-3d11-4f04-be33-45fa3f035129] : Timed out while waiting for systemd to remove kubepods-burstable-pod425e18c5_3d11_4f04_be33_45fa3f035129.slice" pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh" podUID="425e18c5-3d11-4f04-be33-45fa3f035129"
Mar 13 12:40:16.716828 master-0 kubenswrapper[6980]: I0313 12:40:16.715977 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 13 12:40:16.716828 master-0 kubenswrapper[6980]: I0313 12:40:16.716204 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="85ff9007-425d-4004-9a9d-25ac2b47761c" containerName="installer" containerID="cri-o://f3b04290edf952f0f53372e1b248559907410429206c66430ddadabfa3f41959" gracePeriod=30
Mar 13 12:40:16.972279 master-0 kubenswrapper[6980]: I0313 12:40:16.972113 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" event={"ID":"0943b2db-9658-4a8d-89da-00779d55db6e","Type":"ContainerStarted","Data":"99157da85801470232747defd36ef5897fda4eb1bccf9ffb6147e591cfd1e90b"}
Mar 13 12:40:16.975852 master-0 kubenswrapper[6980]: I0313 12:40:16.975817 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_85ff9007-425d-4004-9a9d-25ac2b47761c/installer/0.log"
Mar 13 12:40:16.975949 master-0 kubenswrapper[6980]: I0313 12:40:16.975883 6980 generic.go:334] "Generic (PLEG): container finished" podID="85ff9007-425d-4004-9a9d-25ac2b47761c" containerID="f3b04290edf952f0f53372e1b248559907410429206c66430ddadabfa3f41959" exitCode=1
Mar 13 12:40:16.975997 master-0 kubenswrapper[6980]: I0313 12:40:16.975945 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"85ff9007-425d-4004-9a9d-25ac2b47761c","Type":"ContainerDied","Data":"f3b04290edf952f0f53372e1b248559907410429206c66430ddadabfa3f41959"}
Mar 13 12:40:16.985837 master-0 kubenswrapper[6980]: I0313 12:40:16.985714 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"aae10aa9-9c7d-4319-9829-e900af7df301","Type":"ContainerStarted","Data":"4bc1f7c933d28f40b13d28985334ae240170d114b669b057cc93fee9fb9f7a73"}
Mar 13 12:40:16.985837 master-0 kubenswrapper[6980]: I0313 12:40:16.985790 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-669d874ccc-8rrvh"
Mar 13 12:40:16.993448 master-0 kubenswrapper[6980]: I0313 12:40:16.993365 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" podStartSLOduration=35.212814692 podStartE2EDuration="37.993344057s" podCreationTimestamp="2026-03-13 12:39:39 +0000 UTC" firstStartedPulling="2026-03-13 12:40:12.642973168 +0000 UTC m=+79.976967794" lastFinishedPulling="2026-03-13 12:40:15.423502533 +0000 UTC m=+82.757497159" observedRunningTime="2026-03-13 12:40:16.993003236 +0000 UTC m=+84.326997862" watchObservedRunningTime="2026-03-13 12:40:16.993344057 +0000 UTC m=+84.327338683"
Mar 13 12:40:17.010384 master-0 kubenswrapper[6980]: I0313 12:40:17.010266 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=3.010237634 podStartE2EDuration="3.010237634s" podCreationTimestamp="2026-03-13 12:40:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:40:17.008232353 +0000 UTC m=+84.342226969" watchObservedRunningTime="2026-03-13 12:40:17.010237634 +0000 UTC m=+84.344232260"
Mar 13 12:40:17.042255 master-0 kubenswrapper[6980]: I0313 12:40:17.042198 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"]
Mar 13 12:40:17.043669 master-0 kubenswrapper[6980]: I0313 12:40:17.043099 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.048568 master-0 kubenswrapper[6980]: I0313 12:40:17.048513 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 12:40:17.048737 master-0 kubenswrapper[6980]: I0313 12:40:17.048596 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 12:40:17.049188 master-0 kubenswrapper[6980]: I0313 12:40:17.048990 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 12:40:17.049333 master-0 kubenswrapper[6980]: I0313 12:40:17.049189 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 12:40:17.050376 master-0 kubenswrapper[6980]: I0313 12:40:17.050275 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 12:40:17.061750 master-0 kubenswrapper[6980]: I0313 12:40:17.053724 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-669d874ccc-8rrvh"]
Mar 13 12:40:17.061750 master-0 kubenswrapper[6980]: I0313 12:40:17.054193 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-669d874ccc-8rrvh"]
Mar 13 12:40:17.061750 master-0 kubenswrapper[6980]: I0313 12:40:17.058527 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"]
Mar 13 12:40:17.072049 master-0 kubenswrapper[6980]: I0313 12:40:17.072002 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 13 12:40:17.103881 master-0 kubenswrapper[6980]: I0313 12:40:17.103734 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vg7m\" (UniqueName: \"kubernetes.io/projected/7343df96-cba2-477b-8a1b-7af369620440-kube-api-access-6vg7m\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.104498 master-0 kubenswrapper[6980]: I0313 12:40:17.103941 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343df96-cba2-477b-8a1b-7af369620440-serving-cert\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.104498 master-0 kubenswrapper[6980]: I0313 12:40:17.104101 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-client-ca\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.104498 master-0 kubenswrapper[6980]: I0313 12:40:17.104171 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-config\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.111670 master-0 kubenswrapper[6980]: I0313 12:40:17.106674 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-proxy-ca-bundles\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.111670 master-0 kubenswrapper[6980]: I0313 12:40:17.106870 6980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/425e18c5-3d11-4f04-be33-45fa3f035129-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:17.111670 master-0 kubenswrapper[6980]: I0313 12:40:17.106894 6980 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/425e18c5-3d11-4f04-be33-45fa3f035129-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:17.118726 master-0 kubenswrapper[6980]: I0313 12:40:17.118687 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_85ff9007-425d-4004-9a9d-25ac2b47761c/installer/0.log"
Mar 13 12:40:17.118856 master-0 kubenswrapper[6980]: I0313 12:40:17.118789 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:40:17.222018 master-0 kubenswrapper[6980]: I0313 12:40:17.221890 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-proxy-ca-bundles\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.222018 master-0 kubenswrapper[6980]: I0313 12:40:17.222006 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vg7m\" (UniqueName: \"kubernetes.io/projected/7343df96-cba2-477b-8a1b-7af369620440-kube-api-access-6vg7m\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.222497 master-0 kubenswrapper[6980]: I0313 12:40:17.222060 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343df96-cba2-477b-8a1b-7af369620440-serving-cert\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.222497 master-0 kubenswrapper[6980]: I0313 12:40:17.222130 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-client-ca\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.222497 master-0 kubenswrapper[6980]: I0313 12:40:17.222167 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-config\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.222497 master-0 kubenswrapper[6980]: I0313 12:40:17.222236 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:40:17.222497 master-0 kubenswrapper[6980]: I0313 12:40:17.222326 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:40:17.223503 master-0 kubenswrapper[6980]: I0313 12:40:17.223458 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-client-ca\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.223931 master-0 kubenswrapper[6980]: I0313 12:40:17.223893 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-config\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.224135 master-0 kubenswrapper[6980]: I0313 12:40:17.224086 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-proxy-ca-bundles\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.230733 master-0 kubenswrapper[6980]: I0313 12:40:17.230656 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343df96-cba2-477b-8a1b-7af369620440-serving-cert\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.260670 master-0 kubenswrapper[6980]: I0313 12:40:17.258976 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vg7m\" (UniqueName: \"kubernetes.io/projected/7343df96-cba2-477b-8a1b-7af369620440-kube-api-access-6vg7m\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.296986 master-0 kubenswrapper[6980]: I0313 12:40:17.296920 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:40:17.322727 master-0 kubenswrapper[6980]: I0313 12:40:17.322680 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-var-lock\") pod \"85ff9007-425d-4004-9a9d-25ac2b47761c\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") "
Mar 13 12:40:17.323058 master-0 kubenswrapper[6980]: I0313 12:40:17.322764 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85ff9007-425d-4004-9a9d-25ac2b47761c-kube-api-access\") pod \"85ff9007-425d-4004-9a9d-25ac2b47761c\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") "
Mar 13 12:40:17.323058 master-0 kubenswrapper[6980]: I0313 12:40:17.322810 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-kubelet-dir\") pod \"85ff9007-425d-4004-9a9d-25ac2b47761c\" (UID: \"85ff9007-425d-4004-9a9d-25ac2b47761c\") "
Mar 13 12:40:17.323144 master-0 kubenswrapper[6980]: I0313 12:40:17.323105 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "85ff9007-425d-4004-9a9d-25ac2b47761c" (UID: "85ff9007-425d-4004-9a9d-25ac2b47761c"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:40:17.323175 master-0 kubenswrapper[6980]: I0313 12:40:17.323151 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-var-lock" (OuterVolumeSpecName: "var-lock") pod "85ff9007-425d-4004-9a9d-25ac2b47761c" (UID: "85ff9007-425d-4004-9a9d-25ac2b47761c"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:40:17.325941 master-0 kubenswrapper[6980]: I0313 12:40:17.325916 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/85ff9007-425d-4004-9a9d-25ac2b47761c-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "85ff9007-425d-4004-9a9d-25ac2b47761c" (UID: "85ff9007-425d-4004-9a9d-25ac2b47761c"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:40:17.371385 master-0 kubenswrapper[6980]: I0313 12:40:17.371312 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:40:17.424998 master-0 kubenswrapper[6980]: I0313 12:40:17.424714 6980 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:17.425181 master-0 kubenswrapper[6980]: I0313 12:40:17.425005 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/85ff9007-425d-4004-9a9d-25ac2b47761c-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:17.425181 master-0 kubenswrapper[6980]: I0313 12:40:17.425032 6980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/85ff9007-425d-4004-9a9d-25ac2b47761c-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:40:17.755293 master-0 kubenswrapper[6980]: I0313 12:40:17.755153 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"]
Mar 13 12:40:17.760758 master-0 kubenswrapper[6980]: W0313 12:40:17.760704 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7343df96_cba2_477b_8a1b_7af369620440.slice/crio-5868e4aaa495ba2002dc9f38876278ea8eced1d322d3455b76a22ad5843a0e53 WatchSource:0}: Error finding container 5868e4aaa495ba2002dc9f38876278ea8eced1d322d3455b76a22ad5843a0e53: Status 404 returned error can't find the container with id 5868e4aaa495ba2002dc9f38876278ea8eced1d322d3455b76a22ad5843a0e53
Mar 13 12:40:17.992106 master-0 kubenswrapper[6980]: I0313 12:40:17.992028 6980 generic.go:334] "Generic (PLEG): container finished" podID="684c9067-189a-4f50-ac8d-97111aa73d9c" containerID="59914b16ce26e359fa0f8c879d562000e5c33058f6a9e4b5ad9002af5b9b5469" exitCode=0
Mar 13 12:40:17.992714 master-0 kubenswrapper[6980]: I0313 12:40:17.992134 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" event={"ID":"684c9067-189a-4f50-ac8d-97111aa73d9c","Type":"ContainerDied","Data":"59914b16ce26e359fa0f8c879d562000e5c33058f6a9e4b5ad9002af5b9b5469"}
Mar 13 12:40:17.992800 master-0 kubenswrapper[6980]: I0313 12:40:17.992768 6980 scope.go:117] "RemoveContainer" containerID="59914b16ce26e359fa0f8c879d562000e5c33058f6a9e4b5ad9002af5b9b5469"
Mar 13 12:40:17.995938 master-0 kubenswrapper[6980]: I0313 12:40:17.994820 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_85ff9007-425d-4004-9a9d-25ac2b47761c/installer/0.log"
Mar 13 12:40:17.995938 master-0 kubenswrapper[6980]: I0313 12:40:17.994900 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"85ff9007-425d-4004-9a9d-25ac2b47761c","Type":"ContainerDied","Data":"152fd23bd3a9a9af5b3bf1f5e4dc3d7d1f53f7b6b0961acf72fa544ca5a951fa"}
Mar 13 12:40:17.995938 master-0 kubenswrapper[6980]: I0313 12:40:17.994926 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0"
Mar 13 12:40:17.995938 master-0 kubenswrapper[6980]: I0313 12:40:17.994940 6980 scope.go:117] "RemoveContainer" containerID="f3b04290edf952f0f53372e1b248559907410429206c66430ddadabfa3f41959"
Mar 13 12:40:18.000691 master-0 kubenswrapper[6980]: I0313 12:40:18.000624 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" event={"ID":"7343df96-cba2-477b-8a1b-7af369620440","Type":"ContainerStarted","Data":"5868e4aaa495ba2002dc9f38876278ea8eced1d322d3455b76a22ad5843a0e53"}
Mar 13 12:40:18.008872 master-0 kubenswrapper[6980]: I0313 12:40:18.008757 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:40:18.072885 master-0 kubenswrapper[6980]: I0313 12:40:18.068772 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 13 12:40:18.080842 master-0 kubenswrapper[6980]: I0313 12:40:18.074940 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 13 12:40:18.100660 master-0 kubenswrapper[6980]: I0313 12:40:18.100596 6980 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 13 12:40:18.100947 master-0 kubenswrapper[6980]: I0313 12:40:18.100888 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" containerID="cri-o://e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d" gracePeriod=30
Mar 13 12:40:18.101081 master-0 kubenswrapper[6980]: I0313 12:40:18.101039 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" containerID="cri-o://ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf" gracePeriod=30
Mar 13 12:40:18.103027 master-0 kubenswrapper[6980]: I0313 12:40:18.102994 6980 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 13 12:40:18.103257 master-0 kubenswrapper[6980]: E0313 12:40:18.103237 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 13 12:40:18.103332 master-0 kubenswrapper[6980]: I0313 12:40:18.103264 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 13 12:40:18.103332 master-0 kubenswrapper[6980]: E0313 12:40:18.103277 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 13 12:40:18.103332 master-0 kubenswrapper[6980]: I0313 12:40:18.103285 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 13 12:40:18.103332 master-0 kubenswrapper[6980]: E0313 12:40:18.103305 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="85ff9007-425d-4004-9a9d-25ac2b47761c" containerName="installer"
Mar 13 12:40:18.103332 master-0 kubenswrapper[6980]: I0313 12:40:18.103314 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="85ff9007-425d-4004-9a9d-25ac2b47761c" containerName="installer"
Mar 13 12:40:18.103554 master-0 kubenswrapper[6980]: I0313 12:40:18.103431 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 13 12:40:18.103554 master-0 kubenswrapper[6980]: I0313 12:40:18.103453 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 13 12:40:18.103554 master-0 kubenswrapper[6980]: I0313 12:40:18.103464 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="85ff9007-425d-4004-9a9d-25ac2b47761c" containerName="installer"
Mar 13 12:40:18.105887 master-0 kubenswrapper[6980]: I0313 12:40:18.105843 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.238268 master-0 kubenswrapper[6980]: I0313 12:40:18.238000 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.238268 master-0 kubenswrapper[6980]: I0313 12:40:18.238059 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.238268 master-0 kubenswrapper[6980]: I0313 12:40:18.238112 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.238268 master-0 kubenswrapper[6980]: I0313 12:40:18.238133 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.238268 master-0 kubenswrapper[6980]: I0313 12:40:18.238162 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.238268 master-0 kubenswrapper[6980]: I0313 12:40:18.238180 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.339253 master-0 kubenswrapper[6980]: I0313 12:40:18.339194 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.339253 master-0 kubenswrapper[6980]: I0313 12:40:18.339260 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.339700 master-0 kubenswrapper[6980]: I0313 12:40:18.339304 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.339700 master-0 kubenswrapper[6980]: I0313 12:40:18.339421 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:18.339700 master-0 kubenswrapper[6980]: I0313 12:40:18.339491 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:40:18.339700 master-0 kubenswrapper[6980]: I0313 12:40:18.339556 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:40:18.339700 master-0 kubenswrapper[6980]: I0313 12:40:18.339591 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:40:18.339700 master-0 kubenswrapper[6980]: I0313 12:40:18.339655 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:40:18.339700 master-0 kubenswrapper[6980]: I0313 12:40:18.339681 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:40:18.339700 master-0 kubenswrapper[6980]: I0313 12:40:18.339698 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:40:18.340292 master-0 kubenswrapper[6980]: I0313 12:40:18.339720 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:40:18.340292 master-0 kubenswrapper[6980]: I0313 12:40:18.339738 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:40:18.619694 master-0 kubenswrapper[6980]: I0313 12:40:18.619645 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_f52d50d6-44fd-47d2-bca6-77be37c69694/installer/0.log" Mar 13 12:40:18.619892 master-0 kubenswrapper[6980]: I0313 12:40:18.619737 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:40:18.743986 master-0 kubenswrapper[6980]: I0313 12:40:18.743916 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-kubelet-dir\") pod \"f52d50d6-44fd-47d2-bca6-77be37c69694\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " Mar 13 12:40:18.743986 master-0 kubenswrapper[6980]: I0313 12:40:18.744004 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-var-lock\") pod \"f52d50d6-44fd-47d2-bca6-77be37c69694\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " Mar 13 12:40:18.743986 master-0 kubenswrapper[6980]: I0313 12:40:18.744132 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f52d50d6-44fd-47d2-bca6-77be37c69694-kube-api-access\") pod \"f52d50d6-44fd-47d2-bca6-77be37c69694\" (UID: \"f52d50d6-44fd-47d2-bca6-77be37c69694\") " Mar 13 12:40:18.751020 master-0 kubenswrapper[6980]: I0313 12:40:18.744341 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f52d50d6-44fd-47d2-bca6-77be37c69694" (UID: "f52d50d6-44fd-47d2-bca6-77be37c69694"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:40:18.751020 master-0 kubenswrapper[6980]: I0313 12:40:18.744396 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-var-lock" (OuterVolumeSpecName: "var-lock") pod "f52d50d6-44fd-47d2-bca6-77be37c69694" (UID: "f52d50d6-44fd-47d2-bca6-77be37c69694"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:40:18.757742 master-0 kubenswrapper[6980]: I0313 12:40:18.757680 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f52d50d6-44fd-47d2-bca6-77be37c69694-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f52d50d6-44fd-47d2-bca6-77be37c69694" (UID: "f52d50d6-44fd-47d2-bca6-77be37c69694"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:40:18.845547 master-0 kubenswrapper[6980]: I0313 12:40:18.845460 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f52d50d6-44fd-47d2-bca6-77be37c69694-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:18.845547 master-0 kubenswrapper[6980]: I0313 12:40:18.845518 6980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:18.845547 master-0 kubenswrapper[6980]: I0313 12:40:18.845531 6980 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f52d50d6-44fd-47d2-bca6-77be37c69694-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:18.868755 master-0 kubenswrapper[6980]: I0313 12:40:18.868612 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="425e18c5-3d11-4f04-be33-45fa3f035129" path="/var/lib/kubelet/pods/425e18c5-3d11-4f04-be33-45fa3f035129/volumes" Mar 13 12:40:18.869138 master-0 kubenswrapper[6980]: I0313 12:40:18.869109 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="85ff9007-425d-4004-9a9d-25ac2b47761c" path="/var/lib/kubelet/pods/85ff9007-425d-4004-9a9d-25ac2b47761c/volumes" Mar 13 12:40:19.006708 master-0 kubenswrapper[6980]: I0313 12:40:19.006640 6980 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" event={"ID":"684c9067-189a-4f50-ac8d-97111aa73d9c","Type":"ContainerStarted","Data":"cc996817afafd2df7fd421372b8e47516fdf24cdaea627bf1268ff842055a746"} Mar 13 12:40:19.009561 master-0 kubenswrapper[6980]: I0313 12:40:19.009381 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_f52d50d6-44fd-47d2-bca6-77be37c69694/installer/0.log" Mar 13 12:40:19.009561 master-0 kubenswrapper[6980]: I0313 12:40:19.009444 6980 generic.go:334] "Generic (PLEG): container finished" podID="f52d50d6-44fd-47d2-bca6-77be37c69694" containerID="a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083" exitCode=1 Mar 13 12:40:19.009561 master-0 kubenswrapper[6980]: I0313 12:40:19.009527 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 12:40:19.009561 master-0 kubenswrapper[6980]: I0313 12:40:19.009527 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"f52d50d6-44fd-47d2-bca6-77be37c69694","Type":"ContainerDied","Data":"a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083"} Mar 13 12:40:19.009819 master-0 kubenswrapper[6980]: I0313 12:40:19.009614 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"f52d50d6-44fd-47d2-bca6-77be37c69694","Type":"ContainerDied","Data":"5d430aa8dbd7c3018a7e05ad11fe92ea7c8db90db9b0a43b068c0c9e5ee73025"} Mar 13 12:40:19.009819 master-0 kubenswrapper[6980]: I0313 12:40:19.009643 6980 scope.go:117] "RemoveContainer" containerID="a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083" Mar 13 12:40:19.023737 master-0 kubenswrapper[6980]: I0313 12:40:19.023698 6980 scope.go:117] "RemoveContainer" containerID="a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083" 
Mar 13 12:40:19.024194 master-0 kubenswrapper[6980]: E0313 12:40:19.024152 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083\": container with ID starting with a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083 not found: ID does not exist" containerID="a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083" Mar 13 12:40:19.024269 master-0 kubenswrapper[6980]: I0313 12:40:19.024222 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083"} err="failed to get container status \"a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083\": rpc error: code = NotFound desc = could not find container \"a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083\": container with ID starting with a248355f5c228d7e60be41b1c6315ad50d3f34eaed59b01dd3e1d11bb8a89083 not found: ID does not exist" Mar 13 12:40:22.188590 master-0 kubenswrapper[6980]: I0313 12:40:22.188179 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qh2tf" Mar 13 12:40:24.068303 master-0 kubenswrapper[6980]: I0313 12:40:24.068133 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" event={"ID":"7343df96-cba2-477b-8a1b-7af369620440","Type":"ContainerStarted","Data":"2da3308778e062a9343f0d3dfdc8d6eb4f753f82d1909a294c12d86a1ca52396"} Mar 13 12:40:24.068941 master-0 kubenswrapper[6980]: I0313 12:40:24.068425 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:40:24.075422 master-0 kubenswrapper[6980]: I0313 12:40:24.075368 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:40:24.535496 master-0 kubenswrapper[6980]: I0313 12:40:24.535462 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_642c9e64-2d6f-4f0a-babf-8a54e0002415/installer/0.log" Mar 13 12:40:24.535791 master-0 kubenswrapper[6980]: I0313 12:40:24.535772 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:40:24.675420 master-0 kubenswrapper[6980]: I0313 12:40:24.675361 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/642c9e64-2d6f-4f0a-babf-8a54e0002415-kube-api-access\") pod \"642c9e64-2d6f-4f0a-babf-8a54e0002415\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " Mar 13 12:40:24.675754 master-0 kubenswrapper[6980]: I0313 12:40:24.675440 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-kubelet-dir\") pod \"642c9e64-2d6f-4f0a-babf-8a54e0002415\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " Mar 13 12:40:24.675754 master-0 kubenswrapper[6980]: I0313 12:40:24.675471 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-var-lock\") pod \"642c9e64-2d6f-4f0a-babf-8a54e0002415\" (UID: \"642c9e64-2d6f-4f0a-babf-8a54e0002415\") " Mar 13 12:40:24.675953 master-0 kubenswrapper[6980]: I0313 12:40:24.675880 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-var-lock" (OuterVolumeSpecName: "var-lock") pod "642c9e64-2d6f-4f0a-babf-8a54e0002415" (UID: "642c9e64-2d6f-4f0a-babf-8a54e0002415"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:40:24.675953 master-0 kubenswrapper[6980]: I0313 12:40:24.675919 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "642c9e64-2d6f-4f0a-babf-8a54e0002415" (UID: "642c9e64-2d6f-4f0a-babf-8a54e0002415"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:40:24.681457 master-0 kubenswrapper[6980]: I0313 12:40:24.681376 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/642c9e64-2d6f-4f0a-babf-8a54e0002415-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "642c9e64-2d6f-4f0a-babf-8a54e0002415" (UID: "642c9e64-2d6f-4f0a-babf-8a54e0002415"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:40:24.777240 master-0 kubenswrapper[6980]: I0313 12:40:24.777100 6980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:24.777240 master-0 kubenswrapper[6980]: I0313 12:40:24.777184 6980 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/642c9e64-2d6f-4f0a-babf-8a54e0002415-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:24.777240 master-0 kubenswrapper[6980]: I0313 12:40:24.777206 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/642c9e64-2d6f-4f0a-babf-8a54e0002415-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:25.078700 master-0 kubenswrapper[6980]: I0313 12:40:25.078540 6980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_642c9e64-2d6f-4f0a-babf-8a54e0002415/installer/0.log" Mar 13 12:40:25.079560 master-0 kubenswrapper[6980]: I0313 12:40:25.079517 6980 generic.go:334] "Generic (PLEG): container finished" podID="642c9e64-2d6f-4f0a-babf-8a54e0002415" containerID="6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44" exitCode=1 Mar 13 12:40:25.079854 master-0 kubenswrapper[6980]: I0313 12:40:25.079605 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"642c9e64-2d6f-4f0a-babf-8a54e0002415","Type":"ContainerDied","Data":"6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44"} Mar 13 12:40:25.080012 master-0 kubenswrapper[6980]: I0313 12:40:25.079636 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 12:40:25.081207 master-0 kubenswrapper[6980]: I0313 12:40:25.080910 6980 scope.go:117] "RemoveContainer" containerID="6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44" Mar 13 12:40:25.081207 master-0 kubenswrapper[6980]: I0313 12:40:25.080811 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"642c9e64-2d6f-4f0a-babf-8a54e0002415","Type":"ContainerDied","Data":"9284574796f94098c86fc67adfe78da0327345414266aabaefa42affb7228984"} Mar 13 12:40:25.100886 master-0 kubenswrapper[6980]: I0313 12:40:25.100836 6980 scope.go:117] "RemoveContainer" containerID="6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44" Mar 13 12:40:25.101382 master-0 kubenswrapper[6980]: E0313 12:40:25.101318 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44\": container with ID starting with 
6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44 not found: ID does not exist" containerID="6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44" Mar 13 12:40:25.101509 master-0 kubenswrapper[6980]: I0313 12:40:25.101381 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44"} err="failed to get container status \"6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44\": rpc error: code = NotFound desc = could not find container \"6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44\": container with ID starting with 6d0c53c4b21bc4147655aff75430bf3c82edbcaa28e2e672ef1fa7d36f0d0a44 not found: ID does not exist" Mar 13 12:40:28.904002 master-0 kubenswrapper[6980]: I0313 12:40:28.903893 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Mar 13 12:40:28.904731 master-0 kubenswrapper[6980]: I0313 12:40:28.904046 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Mar 13 12:40:31.109271 master-0 kubenswrapper[6980]: I0313 12:40:31.109087 6980 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="9db6288a98029b0a09c12d8d262b41839cd5c5aa57fa3824b78834e64ca0ee2e" exitCode=1 Mar 13 12:40:31.109271 master-0 kubenswrapper[6980]: I0313 12:40:31.109140 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"9db6288a98029b0a09c12d8d262b41839cd5c5aa57fa3824b78834e64ca0ee2e"} Mar 13 12:40:31.109919 master-0 kubenswrapper[6980]: I0313 12:40:31.109680 6980 scope.go:117] "RemoveContainer" containerID="9db6288a98029b0a09c12d8d262b41839cd5c5aa57fa3824b78834e64ca0ee2e" Mar 13 12:40:31.173208 master-0 kubenswrapper[6980]: E0313 12:40:31.173105 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 12:40:31.173837 master-0 kubenswrapper[6980]: I0313 12:40:31.173782 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:40:31.857685 master-0 kubenswrapper[6980]: I0313 12:40:31.857560 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:40:32.116248 master-0 kubenswrapper[6980]: I0313 12:40:32.116081 6980 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="f03432950be1db5b9603e1c8d4f0c02f9b3f872ef406dc3fb4113432dc294cf7" exitCode=0 Mar 13 12:40:32.116248 master-0 kubenswrapper[6980]: I0313 12:40:32.116180 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"f03432950be1db5b9603e1c8d4f0c02f9b3f872ef406dc3fb4113432dc294cf7"} Mar 13 12:40:32.116248 master-0 kubenswrapper[6980]: I0313 12:40:32.116235 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"f0e851477be2d69038712518ed1a5f5d94544dd20cc5ae90880136f09179a721"} Mar 13 12:40:32.118623 master-0 kubenswrapper[6980]: I0313 
12:40:32.118590 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"63f3b75b31fa7fc52cd298f2c204c45e0576c862a52323ba1d17c643900efba4"} Mar 13 12:40:33.124390 master-0 kubenswrapper[6980]: I0313 12:40:33.124305 6980 generic.go:334] "Generic (PLEG): container finished" podID="7028b88a-ef6e-47f7-bbd7-cf798efdded5" containerID="79cd707206ff99c36a959e487c7685688d55e645d476231af44713218abe6dab" exitCode=0 Mar 13 12:40:33.124390 master-0 kubenswrapper[6980]: I0313 12:40:33.124364 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"7028b88a-ef6e-47f7-bbd7-cf798efdded5","Type":"ContainerDied","Data":"79cd707206ff99c36a959e487c7685688d55e645d476231af44713218abe6dab"} Mar 13 12:40:34.396381 master-0 kubenswrapper[6980]: I0313 12:40:34.396280 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 12:40:34.541240 master-0 kubenswrapper[6980]: I0313 12:40:34.541120 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kube-api-access\") pod \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " Mar 13 12:40:34.541240 master-0 kubenswrapper[6980]: I0313 12:40:34.541227 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kubelet-dir\") pod \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " Mar 13 12:40:34.541727 master-0 kubenswrapper[6980]: I0313 12:40:34.541281 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-var-lock\") pod \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\" (UID: \"7028b88a-ef6e-47f7-bbd7-cf798efdded5\") " Mar 13 12:40:34.541808 master-0 kubenswrapper[6980]: I0313 12:40:34.541717 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-var-lock" (OuterVolumeSpecName: "var-lock") pod "7028b88a-ef6e-47f7-bbd7-cf798efdded5" (UID: "7028b88a-ef6e-47f7-bbd7-cf798efdded5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:40:34.541898 master-0 kubenswrapper[6980]: I0313 12:40:34.541842 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7028b88a-ef6e-47f7-bbd7-cf798efdded5" (UID: "7028b88a-ef6e-47f7-bbd7-cf798efdded5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:40:34.546909 master-0 kubenswrapper[6980]: I0313 12:40:34.546854 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7028b88a-ef6e-47f7-bbd7-cf798efdded5" (UID: "7028b88a-ef6e-47f7-bbd7-cf798efdded5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:40:34.642753 master-0 kubenswrapper[6980]: I0313 12:40:34.642687 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:34.642753 master-0 kubenswrapper[6980]: I0313 12:40:34.642722 6980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:34.642753 master-0 kubenswrapper[6980]: I0313 12:40:34.642730 6980 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7028b88a-ef6e-47f7-bbd7-cf798efdded5-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:35.139593 master-0 kubenswrapper[6980]: I0313 12:40:35.139525 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"7028b88a-ef6e-47f7-bbd7-cf798efdded5","Type":"ContainerDied","Data":"0d2b45e42e0e063443f8930f6b7d09a6d020a634d13e2cb7c2ed7329e003e782"} Mar 13 12:40:35.139850 master-0 kubenswrapper[6980]: I0313 12:40:35.139595 6980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d2b45e42e0e063443f8930f6b7d09a6d020a634d13e2cb7c2ed7329e003e782" Mar 13 12:40:35.140056 master-0 kubenswrapper[6980]: I0313 12:40:35.140021 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 12:40:35.140883 master-0 kubenswrapper[6980]: I0313 12:40:35.140840 6980 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="5b237a8f0fb7f64dfadac55f3b8fce83d665c3145bdb4f7b5e426e2db8133d9a" exitCode=1 Mar 13 12:40:35.140947 master-0 kubenswrapper[6980]: I0313 12:40:35.140883 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"5b237a8f0fb7f64dfadac55f3b8fce83d665c3145bdb4f7b5e426e2db8133d9a"} Mar 13 12:40:35.141208 master-0 kubenswrapper[6980]: I0313 12:40:35.141179 6980 scope.go:117] "RemoveContainer" containerID="5b237a8f0fb7f64dfadac55f3b8fce83d665c3145bdb4f7b5e426e2db8133d9a" Mar 13 12:40:35.339308 master-0 kubenswrapper[6980]: E0313 12:40:35.339237 6980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:40:35.429157 master-0 kubenswrapper[6980]: E0313 12:40:35.428506 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:40:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:40:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:40:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:40:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029
c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\
"sizeBytes\\\":396521759}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:40:36.148151 master-0 kubenswrapper[6980]: I0313 12:40:36.148084 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"47aafa637897db874d2f314c91d98220473a29ec5c1860c9183088400b424069"} Mar 13 12:40:38.737876 master-0 kubenswrapper[6980]: I0313 12:40:38.737812 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:40:38.738404 master-0 kubenswrapper[6980]: I0313 12:40:38.737901 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:40:38.844987 master-0 kubenswrapper[6980]: I0313 12:40:38.844894 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get 
\"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:40:38.844987 master-0 kubenswrapper[6980]: I0313 12:40:38.844977 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:40:38.904062 master-0 kubenswrapper[6980]: I0313 12:40:38.903995 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Mar 13 12:40:38.904290 master-0 kubenswrapper[6980]: I0313 12:40:38.904069 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Mar 13 12:40:39.166138 master-0 kubenswrapper[6980]: I0313 12:40:39.166037 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0feecf04-574d-4bf6-968d-77dd5c35260b/installer/0.log" Mar 13 12:40:39.166138 master-0 kubenswrapper[6980]: I0313 12:40:39.166112 6980 generic.go:334] "Generic (PLEG): container finished" podID="0feecf04-574d-4bf6-968d-77dd5c35260b" containerID="10be8f9ca4ea6e67dd279190add6bee9a3985f10e4ddcd7b2a1c5c6e9e6e6409" exitCode=1 Mar 13 12:40:39.166670 master-0 kubenswrapper[6980]: I0313 12:40:39.166163 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"0feecf04-574d-4bf6-968d-77dd5c35260b","Type":"ContainerDied","Data":"10be8f9ca4ea6e67dd279190add6bee9a3985f10e4ddcd7b2a1c5c6e9e6e6409"} Mar 13 12:40:40.137048 master-0 kubenswrapper[6980]: I0313 12:40:40.137003 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:40:40.436069 master-0 kubenswrapper[6980]: I0313 12:40:40.436018 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0feecf04-574d-4bf6-968d-77dd5c35260b/installer/0.log" Mar 13 12:40:40.436205 master-0 kubenswrapper[6980]: I0313 12:40:40.436120 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:40:40.577450 master-0 kubenswrapper[6980]: I0313 12:40:40.577376 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0feecf04-574d-4bf6-968d-77dd5c35260b-kube-api-access\") pod \"0feecf04-574d-4bf6-968d-77dd5c35260b\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " Mar 13 12:40:40.577450 master-0 kubenswrapper[6980]: I0313 12:40:40.577425 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-kubelet-dir\") pod \"0feecf04-574d-4bf6-968d-77dd5c35260b\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " Mar 13 12:40:40.577851 master-0 kubenswrapper[6980]: I0313 12:40:40.577493 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-var-lock\") pod \"0feecf04-574d-4bf6-968d-77dd5c35260b\" (UID: \"0feecf04-574d-4bf6-968d-77dd5c35260b\") " Mar 13 12:40:40.577851 master-0 kubenswrapper[6980]: 
I0313 12:40:40.577624 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "0feecf04-574d-4bf6-968d-77dd5c35260b" (UID: "0feecf04-574d-4bf6-968d-77dd5c35260b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:40:40.577851 master-0 kubenswrapper[6980]: I0313 12:40:40.577690 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-var-lock" (OuterVolumeSpecName: "var-lock") pod "0feecf04-574d-4bf6-968d-77dd5c35260b" (UID: "0feecf04-574d-4bf6-968d-77dd5c35260b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:40:40.577851 master-0 kubenswrapper[6980]: I0313 12:40:40.577831 6980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:40.577851 master-0 kubenswrapper[6980]: I0313 12:40:40.577844 6980 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/0feecf04-574d-4bf6-968d-77dd5c35260b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:40.581012 master-0 kubenswrapper[6980]: I0313 12:40:40.580950 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0feecf04-574d-4bf6-968d-77dd5c35260b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0feecf04-574d-4bf6-968d-77dd5c35260b" (UID: "0feecf04-574d-4bf6-968d-77dd5c35260b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:40:40.679685 master-0 kubenswrapper[6980]: I0313 12:40:40.679442 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0feecf04-574d-4bf6-968d-77dd5c35260b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:41.177466 master-0 kubenswrapper[6980]: I0313 12:40:41.177403 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0feecf04-574d-4bf6-968d-77dd5c35260b/installer/0.log" Mar 13 12:40:41.178232 master-0 kubenswrapper[6980]: I0313 12:40:41.177485 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"0feecf04-574d-4bf6-968d-77dd5c35260b","Type":"ContainerDied","Data":"9689167e4adfbea953806301dad86365ee4722270dda306dcdfea611bbd4abda"} Mar 13 12:40:41.178232 master-0 kubenswrapper[6980]: I0313 12:40:41.177526 6980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9689167e4adfbea953806301dad86365ee4722270dda306dcdfea611bbd4abda" Mar 13 12:40:41.178232 master-0 kubenswrapper[6980]: I0313 12:40:41.177614 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:40:41.737775 master-0 kubenswrapper[6980]: I0313 12:40:41.737631 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:40:41.737775 master-0 kubenswrapper[6980]: I0313 12:40:41.737702 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:40:41.845002 master-0 kubenswrapper[6980]: I0313 12:40:41.844921 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:40:41.845287 master-0 kubenswrapper[6980]: I0313 12:40:41.845014 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:40:41.857481 master-0 kubenswrapper[6980]: I0313 12:40:41.857377 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:40:43.137554 master-0 kubenswrapper[6980]: I0313 12:40:43.137431 6980 prober.go:107] "Probe 
failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:40:44.737398 master-0 kubenswrapper[6980]: I0313 12:40:44.737333 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:40:44.738053 master-0 kubenswrapper[6980]: I0313 12:40:44.737423 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:40:44.738053 master-0 kubenswrapper[6980]: I0313 12:40:44.737478 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:40:44.738258 master-0 kubenswrapper[6980]: I0313 12:40:44.738217 6980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 13 12:40:44.738324 master-0 kubenswrapper[6980]: I0313 12:40:44.738293 6980 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" containerID="cri-o://24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81" gracePeriod=30 Mar 13 12:40:44.844503 master-0 kubenswrapper[6980]: I0313 12:40:44.844459 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:40:44.844821 master-0 kubenswrapper[6980]: I0313 12:40:44.844789 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:40:44.844979 master-0 kubenswrapper[6980]: I0313 12:40:44.844964 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:40:45.215040 master-0 kubenswrapper[6980]: E0313 12:40:45.214957 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 12:40:45.339968 master-0 kubenswrapper[6980]: E0313 12:40:45.339806 6980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:40:45.429122 master-0 kubenswrapper[6980]: E0313 12:40:45.429071 6980 
kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:40:46.204614 master-0 kubenswrapper[6980]: I0313 12:40:46.204531 6980 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="f08c1a97d40cbbcf3932165b6ed54164f78f18d905db2ebc7a4ee45115dbb224" exitCode=0 Mar 13 12:40:46.205166 master-0 kubenswrapper[6980]: I0313 12:40:46.204632 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"f08c1a97d40cbbcf3932165b6ed54164f78f18d905db2ebc7a4ee45115dbb224"} Mar 13 12:40:46.207595 master-0 kubenswrapper[6980]: I0313 12:40:46.207435 6980 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf" exitCode=0 Mar 13 12:40:47.844960 master-0 kubenswrapper[6980]: I0313 12:40:47.844886 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:40:47.845534 master-0 kubenswrapper[6980]: I0313 12:40:47.844991 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:40:48.213796 master-0 kubenswrapper[6980]: I0313 12:40:48.213740 6980 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 13 12:40:48.214195 master-0 kubenswrapper[6980]: I0313 12:40:48.214175 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:40:48.217065 master-0 kubenswrapper[6980]: I0313 12:40:48.217024 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 13 12:40:48.217159 master-0 kubenswrapper[6980]: I0313 12:40:48.217088 6980 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d" exitCode=137 Mar 13 12:40:48.217159 master-0 kubenswrapper[6980]: I0313 12:40:48.217153 6980 scope.go:117] "RemoveContainer" containerID="ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf" Mar 13 12:40:48.217313 master-0 kubenswrapper[6980]: I0313 12:40:48.217293 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:40:48.229229 master-0 kubenswrapper[6980]: I0313 12:40:48.229189 6980 scope.go:117] "RemoveContainer" containerID="e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d" Mar 13 12:40:48.242724 master-0 kubenswrapper[6980]: I0313 12:40:48.242505 6980 scope.go:117] "RemoveContainer" containerID="ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf" Mar 13 12:40:48.243128 master-0 kubenswrapper[6980]: E0313 12:40:48.243090 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf\": container with ID starting with ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf not found: ID does not exist" containerID="ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf" Mar 13 12:40:48.243207 master-0 kubenswrapper[6980]: I0313 12:40:48.243131 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf"} err="failed to get container status \"ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf\": rpc error: code = NotFound desc = could not find container \"ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf\": container with ID starting with ce3f6b339ea2a74a006dcd1a07a64cb14d8bee77bd53c211d75c1578a17f98cf not found: ID does not exist" Mar 13 12:40:48.243207 master-0 kubenswrapper[6980]: I0313 12:40:48.243156 6980 scope.go:117] "RemoveContainer" containerID="e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d" Mar 13 12:40:48.243618 master-0 kubenswrapper[6980]: E0313 12:40:48.243534 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d\": 
container with ID starting with e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d not found: ID does not exist" containerID="e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d" Mar 13 12:40:48.243618 master-0 kubenswrapper[6980]: I0313 12:40:48.243567 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d"} err="failed to get container status \"e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d\": rpc error: code = NotFound desc = could not find container \"e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d\": container with ID starting with e17fd23e7c462e5f8e8ff55fe9e572a3f346b9435f71be173f1473752850783d not found: ID does not exist" Mar 13 12:40:48.364080 master-0 kubenswrapper[6980]: I0313 12:40:48.363933 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 13 12:40:48.364080 master-0 kubenswrapper[6980]: I0313 12:40:48.364059 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 13 12:40:48.364362 master-0 kubenswrapper[6980]: I0313 12:40:48.364067 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir" (OuterVolumeSpecName: "data-dir") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:40:48.364362 master-0 kubenswrapper[6980]: I0313 12:40:48.364234 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs" (OuterVolumeSpecName: "certs") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:40:48.364640 master-0 kubenswrapper[6980]: I0313 12:40:48.364616 6980 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:48.364640 master-0 kubenswrapper[6980]: I0313 12:40:48.364639 6980 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:40:48.867273 master-0 kubenswrapper[6980]: I0313 12:40:48.867077 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354f29997baa583b6238f7de9108ee10" path="/var/lib/kubelet/pods/354f29997baa583b6238f7de9108ee10/volumes" Mar 13 12:40:48.868019 master-0 kubenswrapper[6980]: I0313 12:40:48.867485 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 12:40:48.904035 master-0 kubenswrapper[6980]: I0313 12:40:48.903979 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Mar 13 12:40:48.904454 master-0 kubenswrapper[6980]: I0313 12:40:48.904410 6980 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused"
Mar 13 12:40:50.844626 master-0 kubenswrapper[6980]: I0313 12:40:50.844466 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:40:50.845934 master-0 kubenswrapper[6980]: I0313 12:40:50.845803 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:40:52.121998 master-0 kubenswrapper[6980]: E0313 12:40:52.121754 6980 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c670385980cb8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:40:18.101030072 +0000 UTC m=+85.435024698,LastTimestamp:2026-03-13 12:40:18.101030072 +0000 UTC m=+85.435024698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:40:53.137727 master-0 kubenswrapper[6980]: I0313 12:40:53.137612 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:40:53.845093 master-0 kubenswrapper[6980]: I0313 12:40:53.844927 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:40:53.845093 master-0 kubenswrapper[6980]: I0313 12:40:53.845019 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:40:54.338360 master-0 kubenswrapper[6980]: I0313 12:40:54.338291 6980 generic.go:334] "Generic (PLEG): container finished" podID="1929440f-f2cc-450d-80ff-ded6788baa74" containerID="add6080be63d96ac6d15e6ae92fd130acd330b669019c0708be53e9f316105b4" exitCode=0
Mar 13 12:40:55.340435 master-0 kubenswrapper[6980]: E0313 12:40:55.340388 6980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:40:55.429727 master-0 kubenswrapper[6980]: E0313 12:40:55.429610 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:40:56.801400 master-0 kubenswrapper[6980]: E0313 12:40:56.801177 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[cluster-monitoring-operator-tls], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" podUID="71b741d4-3899-4d31-afd1-72f5a9321f75"
Mar 13 12:40:56.802153 master-0 kubenswrapper[6980]: E0313 12:40:56.801970 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[package-server-manager-serving-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" podUID="20217cff-2f81-4a56-9c15-28385c19258c"
Mar 13 12:40:56.803233 master-0 kubenswrapper[6980]: E0313 12:40:56.803174 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[srv-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" podUID="8226ffac-1f76-4eaa-ada5-056b5fd031b4"
Mar 13 12:40:56.804363 master-0 kubenswrapper[6980]: E0313 12:40:56.804303 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[marketplace-operator-metrics], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" podUID="6e4e773c-d970-4f5e-9172-c1ebdb41888d"
Mar 13 12:40:56.816680 master-0 kubenswrapper[6980]: E0313 12:40:56.816612 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[metrics-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/network-metrics-daemon-ztpxf" podUID="59c9773d-7e88-4e30-9b8a-792a869a860e"
Mar 13 12:40:56.819831 master-0 kubenswrapper[6980]: E0313 12:40:56.819771 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[webhook-certs], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" podUID="4f942fce-07a9-4377-8330-c6249a5a8b24"
Mar 13 12:40:56.821982 master-0 kubenswrapper[6980]: E0313 12:40:56.821933 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[srv-cert], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" podUID="2a5976df-0366-47b3-bc54-1ba7c249e87c"
Mar 13 12:40:56.844859 master-0 kubenswrapper[6980]: I0313 12:40:56.844782 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:40:56.845063 master-0 kubenswrapper[6980]: I0313 12:40:56.844860 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:40:57.352364 master-0 kubenswrapper[6980]: I0313 12:40:57.352308 6980 generic.go:334] "Generic (PLEG): container finished" podID="0d868028-9984-472a-8403-ffed767e1bf8" containerID="8d3d7c80d1f091cb6801c4897cba8089f08217db69ec67d4a437f0167c034ba9" exitCode=0
Mar 13 12:40:58.359683 master-0 kubenswrapper[6980]: I0313 12:40:58.359512 6980 generic.go:334] "Generic (PLEG): container finished" podID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerID="cf479c6d6c1b4d3fb1e4d8c534df6ecd64180a47813aaab693ac30875cb0165f" exitCode=0
Mar 13 12:40:59.211969 master-0 kubenswrapper[6980]: E0313 12:40:59.211911 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 13 12:40:59.845094 master-0 kubenswrapper[6980]: I0313 12:40:59.844990 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:40:59.845788 master-0 kubenswrapper[6980]: I0313 12:40:59.845111 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:00.372914 master-0 kubenswrapper[6980]: I0313 12:41:00.372828 6980 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="1548830c5fd6aedb1c3d4d7d2384fdb131b3d8e72ab94a40c5ef20cdca9c52d5" exitCode=0
Mar 13 12:41:00.375708 master-0 kubenswrapper[6980]: I0313 12:41:00.375679 6980 generic.go:334] "Generic (PLEG): container finished" podID="54c7efc1-6d89-4831-89d6-6f2812c36c36" containerID="a96046bbc6e2f7a9efce1073fbf280ed5ef6a4fec79a22f6b7f77fdfe7b84349" exitCode=0
Mar 13 12:41:01.745322 master-0 kubenswrapper[6980]: I0313 12:41:01.744566 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:41:01.746035 master-0 kubenswrapper[6980]: I0313 12:41:01.745494 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:41:01.746035 master-0 kubenswrapper[6980]: I0313 12:41:01.745750 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:41:01.746035 master-0 kubenswrapper[6980]: I0313 12:41:01.745851 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"
Mar 13 12:41:01.746317 master-0 kubenswrapper[6980]: I0313 12:41:01.746216 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:41:01.746474 master-0 kubenswrapper[6980]: I0313 12:41:01.746423 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:41:01.746517 master-0 kubenswrapper[6980]: I0313 12:41:01.746478 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:41:01.748106 master-0 kubenswrapper[6980]: I0313 12:41:01.748043 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:41:01.750017 master-0 kubenswrapper[6980]: I0313 12:41:01.749976 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"
Mar 13 12:41:01.750174 master-0 kubenswrapper[6980]: I0313 12:41:01.750135 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:41:01.750595 master-0 kubenswrapper[6980]: I0313 12:41:01.750466 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:41:01.750811 master-0 kubenswrapper[6980]: I0313 12:41:01.750766 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:41:01.751037 master-0 kubenswrapper[6980]: I0313 12:41:01.750978 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:41:01.751664 master-0 kubenswrapper[6980]: I0313 12:41:01.751538 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:41:02.844142 master-0 kubenswrapper[6980]: I0313 12:41:02.844060 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:02.844142 master-0 kubenswrapper[6980]: I0313 12:41:02.844139 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:03.137157 master-0 kubenswrapper[6980]: I0313 12:41:03.137076 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:41:05.341600 master-0 kubenswrapper[6980]: E0313 12:41:05.341480 6980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:41:05.432150 master-0 kubenswrapper[6980]: E0313 12:41:05.431998 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:41:05.845291 master-0 kubenswrapper[6980]: I0313 12:41:05.845167 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:05.845291 master-0 kubenswrapper[6980]: I0313 12:41:05.845275 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:08.411094 master-0 kubenswrapper[6980]: I0313 12:41:08.411033 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-kb5r7_cf580693-2931-4fef-adb5-b396f7303352/approver/0.log"
Mar 13 12:41:08.411880 master-0 kubenswrapper[6980]: I0313 12:41:08.411455 6980 generic.go:334] "Generic (PLEG): container finished" podID="cf580693-2931-4fef-adb5-b396f7303352" containerID="bab02b7b0881c5a887bb7f5e343fcd3261971bd3b26625df2ad95a1d14f0e4fa" exitCode=1
Mar 13 12:41:08.845104 master-0 kubenswrapper[6980]: I0313 12:41:08.844946 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:08.845104 master-0 kubenswrapper[6980]: I0313 12:41:08.845033 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:11.844806 master-0 kubenswrapper[6980]: I0313 12:41:11.844750 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:11.845203 master-0 kubenswrapper[6980]: I0313 12:41:11.844844 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:12.430963 master-0 kubenswrapper[6980]: I0313 12:41:12.430866 6980 generic.go:334] "Generic (PLEG): container finished" podID="edde8919-104a-4f05-8e21-46787f706bed" containerID="24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81" exitCode=0
Mar 13 12:41:14.443464 master-0 kubenswrapper[6980]: I0313 12:41:14.443332 6980 generic.go:334] "Generic (PLEG): container finished" podID="603fef71-e0cd-4617-bd8a-a55580578c2f" containerID="a593c0e3cdcdc60e311759e5407d46a2222b3d9d443d63f109618c4b09858401" exitCode=0
Mar 13 12:41:14.738150 master-0 kubenswrapper[6980]: I0313 12:41:14.737872 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:14.738150 master-0 kubenswrapper[6980]: I0313 12:41:14.738020 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:14.845296 master-0 kubenswrapper[6980]: I0313 12:41:14.845201 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:14.845296 master-0 kubenswrapper[6980]: I0313 12:41:14.845289 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:15.342058 master-0 kubenswrapper[6980]: E0313 12:41:15.341912 6980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:41:15.342394 master-0 kubenswrapper[6980]: I0313 12:41:15.342144 6980 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 13 12:41:15.433513 master-0 kubenswrapper[6980]: E0313 12:41:15.432825 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:41:15.433513 master-0 kubenswrapper[6980]: E0313 12:41:15.433453 6980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 13 12:41:17.762817 master-0 kubenswrapper[6980]: I0313 12:41:17.762700 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:17.763911 master-0 kubenswrapper[6980]: I0313 12:41:17.762827 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:17.844497 master-0 kubenswrapper[6980]: I0313 12:41:17.844407 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:17.844833 master-0 kubenswrapper[6980]: I0313 12:41:17.844533 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:18.806101 master-0 kubenswrapper[6980]: I0313 12:41:18.805985 6980 status_manager.go:851] "Failed to get status for pod" podUID="f52d50d6-44fd-47d2-bca6-77be37c69694" pod="openshift-kube-scheduler/installer-2-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-2-master-0)"
Mar 13 12:41:20.738052 master-0 kubenswrapper[6980]: I0313 12:41:20.737935 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:20.738052 master-0 kubenswrapper[6980]: I0313 12:41:20.738046 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:20.845007 master-0 kubenswrapper[6980]: I0313 12:41:20.844892 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:20.845007 master-0 kubenswrapper[6980]: I0313 12:41:20.845011 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:22.870433 master-0 kubenswrapper[6980]: E0313 12:41:22.870330 6980 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:41:22.871496 master-0 kubenswrapper[6980]: E0313 12:41:22.870513 6980 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.012s"
Mar 13 12:41:22.871496 master-0 kubenswrapper[6980]: I0313 12:41:22.870546 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:41:22.871496 master-0 kubenswrapper[6980]: I0313 12:41:22.871303 6980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted"
Mar 13 12:41:22.871496 master-0 kubenswrapper[6980]: I0313 12:41:22.871364 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" containerID="cri-o://e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5" gracePeriod=30
Mar 13 12:41:22.872027 master-0 kubenswrapper[6980]: I0313 12:41:22.871766 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:22.872027 master-0 kubenswrapper[6980]: I0313 12:41:22.871864 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:22.877647 master-0 kubenswrapper[6980]: I0313 12:41:22.877551 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 13 12:41:23.491865 master-0 kubenswrapper[6980]: I0313 12:41:23.491734 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/1.log"
Mar 13 12:41:23.492876 master-0 kubenswrapper[6980]: I0313 12:41:23.492832 6980 generic.go:334] "Generic (PLEG): container finished" podID="edde8919-104a-4f05-8e21-46787f706bed" containerID="e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5" exitCode=255
Mar 13 12:41:23.845298 master-0 kubenswrapper[6980]: I0313 12:41:23.845089 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:23.845298 master-0 kubenswrapper[6980]: I0313 12:41:23.845234 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:25.343041 master-0 kubenswrapper[6980]: E0313 12:41:25.342939 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms"
Mar 13 12:41:26.125230 master-0 kubenswrapper[6980]: E0313 12:41:26.125046 6980 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-operator-68bd585b-hsrbc.189c670393000f0f openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator-68bd585b-hsrbc,UID:684c9067-189a-4f50-ac8d-97111aa73d9c,APIVersion:v1,ResourceVersion:3710,FieldPath:spec.containers{kube-apiserver-operator},},Reason:Created,Message:Created container: kube-apiserver-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:40:18.325950223 +0000 UTC m=+85.659944849,LastTimestamp:2026-03-13 12:40:18.325950223 +0000 UTC m=+85.659944849,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:41:26.844702 master-0 kubenswrapper[6980]: I0313 12:41:26.844658 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:26.845462 master-0 kubenswrapper[6980]: I0313 12:41:26.845421 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:29.845385 master-0 kubenswrapper[6980]: I0313 12:41:29.845311 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:29.845993 master-0 kubenswrapper[6980]: I0313 12:41:29.845398 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:32.537258 master-0 kubenswrapper[6980]: I0313 12:41:32.537066 6980 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="63f3b75b31fa7fc52cd298f2c204c45e0576c862a52323ba1d17c643900efba4" exitCode=1
Mar 13 12:41:32.844765 master-0 kubenswrapper[6980]: I0313 12:41:32.844566 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:41:32.844765 master-0 kubenswrapper[6980]: I0313 12:41:32.844691 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:41:35.544844 master-0 kubenswrapper[6980]: E0313 12:41:35.544737 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms"
Mar 13 12:41:35.551350 master-0 kubenswrapper[6980]: I0313 12:41:35.551282 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_aae10aa9-9c7d-4319-9829-e900af7df301/installer/0.log"
Mar 13 12:41:35.551561 master-0 kubenswrapper[6980]: I0313 12:41:35.551341 6980 generic.go:334] "Generic (PLEG): container finished" podID="aae10aa9-9c7d-4319-9829-e900af7df301" containerID="4bc1f7c933d28f40b13d28985334ae240170d114b669b057cc93fee9fb9f7a73" exitCode=1
Mar 13 12:41:35.588232 master-0 kubenswrapper[6980]: E0313 12:41:35.588023 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:41:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:41:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:41:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:41:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d
92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\
"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\"sizeBytes\\\":396521759}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:41:35.844960 master-0 kubenswrapper[6980]: I0313 12:41:35.844676 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:41:35.844960 master-0 kubenswrapper[6980]: I0313 12:41:35.844786 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:41:38.844387 master-0 kubenswrapper[6980]: I0313 12:41:38.844314 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:41:38.848631 master-0 kubenswrapper[6980]: 
I0313 12:41:38.844384 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:41:41.845013 master-0 kubenswrapper[6980]: I0313 12:41:41.844927 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:41:41.845637 master-0 kubenswrapper[6980]: I0313 12:41:41.845014 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:41:44.845044 master-0 kubenswrapper[6980]: I0313 12:41:44.844970 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:41:44.845665 master-0 kubenswrapper[6980]: I0313 12:41:44.845042 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 
12:41:45.588971 master-0 kubenswrapper[6980]: E0313 12:41:45.588851 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:41:45.946097 master-0 kubenswrapper[6980]: E0313 12:41:45.945998 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 13 12:41:47.844710 master-0 kubenswrapper[6980]: I0313 12:41:47.844642 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:41:47.845261 master-0 kubenswrapper[6980]: I0313 12:41:47.844715 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:41:49.613636 master-0 kubenswrapper[6980]: I0313 12:41:49.613595 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-hsrbc_684c9067-189a-4f50-ac8d-97111aa73d9c/kube-apiserver-operator/1.log" Mar 13 12:41:49.615032 master-0 kubenswrapper[6980]: I0313 12:41:49.614971 6980 generic.go:334] "Generic (PLEG): container finished" podID="684c9067-189a-4f50-ac8d-97111aa73d9c" 
containerID="cc996817afafd2df7fd421372b8e47516fdf24cdaea627bf1268ff842055a746" exitCode=255 Mar 13 12:41:50.845059 master-0 kubenswrapper[6980]: I0313 12:41:50.844995 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:41:50.845878 master-0 kubenswrapper[6980]: I0313 12:41:50.845797 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:41:53.844533 master-0 kubenswrapper[6980]: I0313 12:41:53.844423 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:41:53.844533 master-0 kubenswrapper[6980]: I0313 12:41:53.844519 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:41:55.590527 master-0 kubenswrapper[6980]: E0313 12:41:55.590425 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request 
canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:41:56.747658 master-0 kubenswrapper[6980]: E0313 12:41:56.747560 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="the server was unable to return a response in the time allotted, but may still be processing the request (get leases.coordination.k8s.io master-0)" interval="1.6s" Mar 13 12:41:56.844658 master-0 kubenswrapper[6980]: I0313 12:41:56.844608 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:41:56.845028 master-0 kubenswrapper[6980]: I0313 12:41:56.844989 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:41:56.880603 master-0 kubenswrapper[6980]: E0313 12:41:56.880521 6980 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:41:56.880880 master-0 kubenswrapper[6980]: E0313 12:41:56.880733 6980 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.01s" Mar 13 12:41:56.880880 master-0 kubenswrapper[6980]: I0313 12:41:56.880757 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:41:56.881627 master-0 kubenswrapper[6980]: I0313 12:41:56.881569 6980 scope.go:117] "RemoveContainer" 
containerID="cf479c6d6c1b4d3fb1e4d8c534df6ecd64180a47813aaab693ac30875cb0165f" Mar 13 12:41:56.887223 master-0 kubenswrapper[6980]: I0313 12:41:56.887178 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 12:41:59.845035 master-0 kubenswrapper[6980]: I0313 12:41:59.844894 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:41:59.845035 master-0 kubenswrapper[6980]: I0313 12:41:59.845017 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:42:00.128241 master-0 kubenswrapper[6980]: E0313 12:42:00.127989 6980 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-operator-68bd585b-hsrbc.189c6703939e025c openshift-kube-apiserver-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver-operator,Name:kube-apiserver-operator-68bd585b-hsrbc,UID:684c9067-189a-4f50-ac8d-97111aa73d9c,APIVersion:v1,ResourceVersion:3710,FieldPath:spec.containers{kube-apiserver-operator},},Reason:Started,Message:Started container kube-apiserver-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:40:18.33630166 +0000 UTC m=+85.670296276,LastTimestamp:2026-03-13 12:40:18.33630166 +0000 UTC 
m=+85.670296276,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:42:02.844370 master-0 kubenswrapper[6980]: I0313 12:42:02.844288 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:42:02.844370 master-0 kubenswrapper[6980]: I0313 12:42:02.844362 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:42:05.591276 master-0 kubenswrapper[6980]: E0313 12:42:05.590913 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:05.845041 master-0 kubenswrapper[6980]: I0313 12:42:05.844858 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:42:05.845041 master-0 kubenswrapper[6980]: I0313 12:42:05.844993 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" 
probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:42:07.705770 master-0 kubenswrapper[6980]: I0313 12:42:07.705693 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/0.log" Mar 13 12:42:07.705770 master-0 kubenswrapper[6980]: I0313 12:42:07.705756 6980 generic.go:334] "Generic (PLEG): container finished" podID="c1213b50-28bf-43ff-94c4-20616907735b" containerID="5568c74bf78103146825d0653ed59a230ea4678a37b99c81a8ff3d46062174bd" exitCode=1 Mar 13 12:42:08.350282 master-0 kubenswrapper[6980]: E0313 12:42:08.350167 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 13 12:42:08.845412 master-0 kubenswrapper[6980]: I0313 12:42:08.845316 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:42:08.846132 master-0 kubenswrapper[6980]: I0313 12:42:08.845409 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:42:11.844836 master-0 kubenswrapper[6980]: I0313 12:42:11.844738 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z 
container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:42:11.845616 master-0 kubenswrapper[6980]: I0313 12:42:11.844846 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:42:14.844218 master-0 kubenswrapper[6980]: I0313 12:42:14.844146 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:42:14.845206 master-0 kubenswrapper[6980]: I0313 12:42:14.844247 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:42:15.591700 master-0 kubenswrapper[6980]: E0313 12:42:15.591256 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:15.591700 master-0 kubenswrapper[6980]: E0313 12:42:15.591310 6980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 
12:42:17.844794 master-0 kubenswrapper[6980]: I0313 12:42:17.844692 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:42:17.845439 master-0 kubenswrapper[6980]: I0313 12:42:17.844815 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 12:42:18.807486 master-0 kubenswrapper[6980]: I0313 12:42:18.807416 6980 status_manager.go:851] "Failed to get status for pod" podUID="0feecf04-574d-4bf6-968d-77dd5c35260b" pod="openshift-kube-apiserver/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Mar 13 12:42:20.845033 master-0 kubenswrapper[6980]: I0313 12:42:20.844955 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body= Mar 13 12:42:20.845033 master-0 kubenswrapper[6980]: I0313 12:42:20.845023 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" Mar 13 
12:42:21.551259 master-0 kubenswrapper[6980]: E0313 12:42:21.551123 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 13 12:42:24.844944 master-0 kubenswrapper[6980]: I0313 12:42:24.844824 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:42:24.845713 master-0 kubenswrapper[6980]: I0313 12:42:24.844963 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:26.341952 master-0 kubenswrapper[6980]: I0313 12:42:26.341907 6980 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-rcfgn container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" start-of-body= Mar 13 12:42:26.342539 master-0 kubenswrapper[6980]: I0313 12:42:26.342506 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" podUID="eda319d8-825a-4881-96a9-5386b87f8a4f" containerName="manager" probeResult="failure" output="Get 
\"http://10.128.0.38:8081/readyz\": dial tcp 10.128.0.38:8081: connect: connection refused" Mar 13 12:42:26.342697 master-0 kubenswrapper[6980]: I0313 12:42:26.341957 6980 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-rcfgn container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.38:8081/healthz\": dial tcp 10.128.0.38:8081: connect: connection refused" start-of-body= Mar 13 12:42:26.342778 master-0 kubenswrapper[6980]: I0313 12:42:26.342740 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" podUID="eda319d8-825a-4881-96a9-5386b87f8a4f" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.38:8081/healthz\": dial tcp 10.128.0.38:8081: connect: connection refused" Mar 13 12:42:26.789408 master-0 kubenswrapper[6980]: I0313 12:42:26.789345 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-rcfgn_eda319d8-825a-4881-96a9-5386b87f8a4f/manager/0.log" Mar 13 12:42:26.789714 master-0 kubenswrapper[6980]: I0313 12:42:26.789409 6980 generic.go:334] "Generic (PLEG): container finished" podID="eda319d8-825a-4881-96a9-5386b87f8a4f" containerID="cbb2865534497635b5ca625e2074d592be0ad7241931d751a9044f1c282a4c0f" exitCode=1 Mar 13 12:42:27.845111 master-0 kubenswrapper[6980]: I0313 12:42:27.844988 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:42:27.845757 master-0 kubenswrapper[6980]: I0313 12:42:27.845120 6980 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:28.805001 master-0 kubenswrapper[6980]: I0313 12:42:28.804936 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/0.log" Mar 13 12:42:28.805001 master-0 kubenswrapper[6980]: I0313 12:42:28.804997 6980 generic.go:334] "Generic (PLEG): container finished" podID="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53" containerID="6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e" exitCode=1 Mar 13 12:42:30.129722 master-0 kubenswrapper[6980]: I0313 12:42:30.129616 6980 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-lwxxn container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" start-of-body= Mar 13 12:42:30.130295 master-0 kubenswrapper[6980]: I0313 12:42:30.130258 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" podUID="a8c840d1-8047-4ad6-a990-3ab119ae1cc5" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/readyz\": dial tcp 10.128.0.36:8081: connect: connection refused" Mar 13 12:42:30.130432 master-0 kubenswrapper[6980]: I0313 12:42:30.129640 6980 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-lwxxn container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: 
connection refused" start-of-body= Mar 13 12:42:30.130513 master-0 kubenswrapper[6980]: I0313 12:42:30.130460 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" podUID="a8c840d1-8047-4ad6-a990-3ab119ae1cc5" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.36:8081/healthz\": dial tcp 10.128.0.36:8081: connect: connection refused" Mar 13 12:42:30.823084 master-0 kubenswrapper[6980]: I0313 12:42:30.823019 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-lwxxn_a8c840d1-8047-4ad6-a990-3ab119ae1cc5/manager/0.log" Mar 13 12:42:30.823757 master-0 kubenswrapper[6980]: I0313 12:42:30.823701 6980 generic.go:334] "Generic (PLEG): container finished" podID="a8c840d1-8047-4ad6-a990-3ab119ae1cc5" containerID="30b31c049d6bbc747c9d176a9321b53f132ec100e2bcb266f862f58f0efabb73" exitCode=1 Mar 13 12:42:30.844073 master-0 kubenswrapper[6980]: I0313 12:42:30.843987 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:42:30.844246 master-0 kubenswrapper[6980]: I0313 12:42:30.844084 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:30.890010 master-0 kubenswrapper[6980]: E0313 12:42:30.889930 6980 mirror_client.go:138] "Failed deleting a mirror pod" 
err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 12:42:30.890254 master-0 kubenswrapper[6980]: E0313 12:42:30.890173 6980 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.009s" Mar 13 12:42:30.890471 master-0 kubenswrapper[6980]: I0313 12:42:30.890428 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:42:30.890547 master-0 kubenswrapper[6980]: I0313 12:42:30.890473 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:42:30.890653 master-0 kubenswrapper[6980]: I0313 12:42:30.890635 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:42:30.891037 master-0 kubenswrapper[6980]: I0313 12:42:30.890978 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:42:30.891157 master-0 kubenswrapper[6980]: I0313 12:42:30.891077 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:42:30.891221 master-0 kubenswrapper[6980]: I0313 12:42:30.891127 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:42:30.891302 master-0 kubenswrapper[6980]: I0313 12:42:30.891283 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:42:30.891377 master-0 kubenswrapper[6980]: I0313 12:42:30.891314 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:42:30.891659 master-0 kubenswrapper[6980]: I0313 12:42:30.891596 6980 scope.go:117] "RemoveContainer" containerID="a96046bbc6e2f7a9efce1073fbf280ed5ef6a4fec79a22f6b7f77fdfe7b84349" Mar 13 12:42:30.891734 master-0 kubenswrapper[6980]: I0313 12:42:30.891654 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:42:30.892785 master-0 kubenswrapper[6980]: I0313 12:42:30.892184 6980 scope.go:117] "RemoveContainer" containerID="bab02b7b0881c5a887bb7f5e343fcd3261971bd3b26625df2ad95a1d14f0e4fa" Mar 13 12:42:30.892785 master-0 kubenswrapper[6980]: I0313 12:42:30.892341 6980 scope.go:117] "RemoveContainer" containerID="8d3d7c80d1f091cb6801c4897cba8089f08217db69ec67d4a437f0167c034ba9" Mar 13 12:42:30.892955 master-0 kubenswrapper[6980]: I0313 12:42:30.892849 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:42:30.893187 master-0 kubenswrapper[6980]: I0313 12:42:30.893144 6980 scope.go:117] "RemoveContainer" containerID="add6080be63d96ac6d15e6ae92fd130acd330b669019c0708be53e9f316105b4" Mar 13 12:42:30.909728 master-0 kubenswrapper[6980]: I0313 12:42:30.909690 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:42:30.910023 master-0 kubenswrapper[6980]: I0313 12:42:30.909997 6980 scope.go:117] "RemoveContainer" containerID="cbb2865534497635b5ca625e2074d592be0ad7241931d751a9044f1c282a4c0f" Mar 13 12:42:30.910295 master-0 kubenswrapper[6980]: I0313 12:42:30.910241 6980 scope.go:117] "RemoveContainer" containerID="a593c0e3cdcdc60e311759e5407d46a2222b3d9d443d63f109618c4b09858401" Mar 13 12:42:30.910528 master-0 kubenswrapper[6980]: I0313 12:42:30.910355 6980 scope.go:117] "RemoveContainer" containerID="63f3b75b31fa7fc52cd298f2c204c45e0576c862a52323ba1d17c643900efba4" Mar 13 12:42:30.912351 master-0 kubenswrapper[6980]: I0313 12:42:30.912246 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 12:42:30.913400 master-0 kubenswrapper[6980]: I0313 12:42:30.913143 6980 scope.go:117] "RemoveContainer" containerID="30b31c049d6bbc747c9d176a9321b53f132ec100e2bcb266f862f58f0efabb73" Mar 13 12:42:30.913486 master-0 kubenswrapper[6980]: I0313 12:42:30.913452 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:42:30.914844 master-0 kubenswrapper[6980]: I0313 12:42:30.914806 6980 scope.go:117] "RemoveContainer" containerID="cc996817afafd2df7fd421372b8e47516fdf24cdaea627bf1268ff842055a746" Mar 13 12:42:30.915215 master-0 kubenswrapper[6980]: I0313 12:42:30.915177 6980 scope.go:117] "RemoveContainer" containerID="6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e" Mar 13 12:42:30.915321 master-0 kubenswrapper[6980]: I0313 12:42:30.915287 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:42:30.915593 master-0 kubenswrapper[6980]: I0313 12:42:30.915563 6980 scope.go:117] "RemoveContainer" containerID="5568c74bf78103146825d0653ed59a230ea4678a37b99c81a8ff3d46062174bd" Mar 13 12:42:30.916532 master-0 kubenswrapper[6980]: I0313 12:42:30.916491 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:42:31.836384 master-0 kubenswrapper[6980]: I0313 12:42:31.836321 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-lwxxn_a8c840d1-8047-4ad6-a990-3ab119ae1cc5/manager/0.log" Mar 13 12:42:31.847819 master-0 kubenswrapper[6980]: I0313 12:42:31.847748 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/0.log" Mar 13 12:42:31.851091 master-0 kubenswrapper[6980]: I0313 12:42:31.851048 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-rcfgn_eda319d8-825a-4881-96a9-5386b87f8a4f/manager/0.log" Mar 13 12:42:31.853297 master-0 kubenswrapper[6980]: I0313 12:42:31.853248 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-hsrbc_684c9067-189a-4f50-ac8d-97111aa73d9c/kube-apiserver-operator/1.log" Mar 13 12:42:31.855787 master-0 kubenswrapper[6980]: I0313 12:42:31.855719 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-kb5r7_cf580693-2931-4fef-adb5-b396f7303352/approver/0.log" Mar 13 12:42:31.857551 master-0 kubenswrapper[6980]: I0313 12:42:31.857510 6980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/0.log" Mar 13 12:42:31.892401 master-0 kubenswrapper[6980]: I0313 12:42:31.892292 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:42:31.892650 master-0 kubenswrapper[6980]: I0313 12:42:31.892432 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:32.132144 master-0 kubenswrapper[6980]: I0313 12:42:32.132033 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_aae10aa9-9c7d-4319-9829-e900af7df301/installer/0.log" Mar 13 12:42:32.132144 master-0 kubenswrapper[6980]: I0313 12:42:32.132105 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:42:32.231910 master-0 kubenswrapper[6980]: I0313 12:42:32.231817 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-var-lock\") pod \"aae10aa9-9c7d-4319-9829-e900af7df301\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " Mar 13 12:42:32.232187 master-0 kubenswrapper[6980]: I0313 12:42:32.231971 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-var-lock" (OuterVolumeSpecName: "var-lock") pod "aae10aa9-9c7d-4319-9829-e900af7df301" (UID: "aae10aa9-9c7d-4319-9829-e900af7df301"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:42:32.232187 master-0 kubenswrapper[6980]: I0313 12:42:32.232004 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aae10aa9-9c7d-4319-9829-e900af7df301-kube-api-access\") pod \"aae10aa9-9c7d-4319-9829-e900af7df301\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " Mar 13 12:42:32.232187 master-0 kubenswrapper[6980]: I0313 12:42:32.232087 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-kubelet-dir\") pod \"aae10aa9-9c7d-4319-9829-e900af7df301\" (UID: \"aae10aa9-9c7d-4319-9829-e900af7df301\") " Mar 13 12:42:32.232503 master-0 kubenswrapper[6980]: I0313 12:42:32.232471 6980 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:42:32.232556 master-0 kubenswrapper[6980]: I0313 12:42:32.232511 6980 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "aae10aa9-9c7d-4319-9829-e900af7df301" (UID: "aae10aa9-9c7d-4319-9829-e900af7df301"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:42:32.235718 master-0 kubenswrapper[6980]: I0313 12:42:32.235677 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aae10aa9-9c7d-4319-9829-e900af7df301-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "aae10aa9-9c7d-4319-9829-e900af7df301" (UID: "aae10aa9-9c7d-4319-9829-e900af7df301"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:42:32.333380 master-0 kubenswrapper[6980]: I0313 12:42:32.333289 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aae10aa9-9c7d-4319-9829-e900af7df301-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:42:32.333380 master-0 kubenswrapper[6980]: I0313 12:42:32.333336 6980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aae10aa9-9c7d-4319-9829-e900af7df301-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:42:32.865608 master-0 kubenswrapper[6980]: I0313 12:42:32.865341 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_aae10aa9-9c7d-4319-9829-e900af7df301/installer/0.log" Mar 13 12:42:32.866322 master-0 kubenswrapper[6980]: I0313 12:42:32.865655 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:42:33.738606 master-0 kubenswrapper[6980]: I0313 12:42:33.738506 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:42:33.738910 master-0 kubenswrapper[6980]: I0313 12:42:33.738639 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:33.845143 master-0 kubenswrapper[6980]: I0313 12:42:33.845052 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:42:33.845143 master-0 kubenswrapper[6980]: I0313 12:42:33.845147 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:34.131953 master-0 kubenswrapper[6980]: E0313 12:42:34.131528 6980 event.go:359] "Server 
rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{controller-manager-5ff9c7cb47-f4k6t.189c6704d3870c71 openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-5ff9c7cb47-f4k6t,UID:7343df96-cba2-477b-8a1b-7af369620440,APIVersion:v1,ResourceVersion:7793,FieldPath:spec.containers{controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\" in 5.94s (5.94s including waiting). Image size: 558210153 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:40:23.703506033 +0000 UTC m=+91.037500659,LastTimestamp:2026-03-13 12:40:23.703506033 +0000 UTC m=+91.037500659,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:42:35.994171 master-0 kubenswrapper[6980]: E0313 12:42:35.993905 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:42:25Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:42:25Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:42:25Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:42:25Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d
92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\
"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\"sizeBytes\\\":396521759}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:36.737842 master-0 kubenswrapper[6980]: I0313 12:42:36.737679 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:42:36.738155 master-0 kubenswrapper[6980]: I0313 12:42:36.737850 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:36.844993 master-0 kubenswrapper[6980]: I0313 12:42:36.844796 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled 
while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:42:36.845343 master-0 kubenswrapper[6980]: I0313 12:42:36.845012 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:37.952934 master-0 kubenswrapper[6980]: E0313 12:42:37.952748 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:42:39.738557 master-0 kubenswrapper[6980]: I0313 12:42:39.738449 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:42:39.739219 master-0 kubenswrapper[6980]: I0313 12:42:39.738618 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:42:39.845596 master-0 kubenswrapper[6980]: I0313 12:42:39.845469 6980 patch_prober.go:28] interesting 
pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:42:39.845836 master-0 kubenswrapper[6980]: I0313 12:42:39.845663 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:42:42.844981 master-0 kubenswrapper[6980]: I0313 12:42:42.844863 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:42:42.845774 master-0 kubenswrapper[6980]: I0313 12:42:42.845017 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:42:43.919542 master-0 kubenswrapper[6980]: E0313 12:42:43.919468 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0"
Mar 13 12:42:45.844812 master-0 kubenswrapper[6980]: I0313 12:42:45.844708 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:42:45.844812 master-0 kubenswrapper[6980]: I0313 12:42:45.844812 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:42:45.995228 master-0 kubenswrapper[6980]: E0313 12:42:45.995100 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:42:48.844309 master-0 kubenswrapper[6980]: I0313 12:42:48.844214 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:42:48.845028 master-0 kubenswrapper[6980]: I0313 12:42:48.844333 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:42:51.844388 master-0 kubenswrapper[6980]: I0313 12:42:51.844285 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 12:42:51.844989 master-0 kubenswrapper[6980]: I0313 12:42:51.844396 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:42:53.844392 master-0 kubenswrapper[6980]: I0313 12:42:53.844310 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:42:53.844392 master-0 kubenswrapper[6980]: I0313 12:42:53.844377 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:42:53.987990 master-0 kubenswrapper[6980]: I0313 12:42:53.987914 6980 generic.go:334] "Generic (PLEG): container finished" podID="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" containerID="8c953c8136772ca565e28cae4ca94f4cbf7b11aff2c6a974b20aeadfaf72a3c5" exitCode=0
Mar 13 12:42:53.989825 master-0 kubenswrapper[6980]: I0313 12:42:53.989777 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/2.log"
Mar 13 12:42:53.990547 master-0 kubenswrapper[6980]: I0313 12:42:53.990506 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/1.log"
Mar 13 12:42:53.991661 master-0 kubenswrapper[6980]: I0313 12:42:53.991588 6980 generic.go:334] "Generic (PLEG): container finished" podID="edde8919-104a-4f05-8e21-46787f706bed" containerID="e28983293a268eeb3a8d9ee62a01d7522220ca8af6806ced1e6376b80a8ffbde" exitCode=255
Mar 13 12:42:54.954325 master-0 kubenswrapper[6980]: E0313 12:42:54.954211 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 13 12:42:55.995802 master-0 kubenswrapper[6980]: E0313 12:42:55.995735 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:42:56.844722 master-0 kubenswrapper[6980]: I0313 12:42:56.844540 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:42:56.844722 master-0 kubenswrapper[6980]: I0313 12:42:56.844712 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:42:59.844656 master-0 kubenswrapper[6980]: I0313 12:42:59.844559 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:42:59.845294 master-0 kubenswrapper[6980]: I0313 12:42:59.844672 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:43:02.033629 master-0 kubenswrapper[6980]: I0313 12:43:02.033567 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/1.log"
Mar 13 12:43:02.034263 master-0 kubenswrapper[6980]: I0313 12:43:02.034086 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/0.log"
Mar 13 12:43:02.034263 master-0 kubenswrapper[6980]: I0313 12:43:02.034123 6980 generic.go:334] "Generic (PLEG): container finished" podID="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53" containerID="47445303fded563085ce6c3a29cc03ab2ac1c4b6933c47fecf2b87970e86cfe3" exitCode=1
Mar 13 12:43:02.035623 master-0 kubenswrapper[6980]: I0313 12:43:02.035559 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-fcthv_3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/network-operator/0.log"
Mar 13 12:43:02.035623 master-0 kubenswrapper[6980]: I0313 12:43:02.035615 6980 generic.go:334] "Generic (PLEG): container finished" podID="3b1777e4-6833-4b68-8cdf-ea8b36dbeae9" containerID="c15cc561a2dc2cb30249635a38f6de933793bd539f9b4fe8d60280e00e99d819" exitCode=255
Mar 13 12:43:02.844862 master-0 kubenswrapper[6980]: I0313 12:43:02.844768 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:43:02.844862 master-0 kubenswrapper[6980]: I0313 12:43:02.844859 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:43:04.916954 master-0 kubenswrapper[6980]: E0313 12:43:04.916866 6980 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 12:43:04.917705 master-0 kubenswrapper[6980]: E0313 12:43:04.917108 6980 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.026s"
Mar 13 12:43:04.923869 master-0 kubenswrapper[6980]: I0313 12:43:04.923818 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 13 12:43:05.844722 master-0 kubenswrapper[6980]: I0313 12:43:05.844638 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:43:05.844722 master-0 kubenswrapper[6980]: I0313 12:43:05.844722 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:43:05.997718 master-0 kubenswrapper[6980]: E0313 12:43:05.997635 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:43:08.135546 master-0 kubenswrapper[6980]: E0313 12:43:08.135365 6980 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{controller-manager-5ff9c7cb47-f4k6t.189c6704d9459298 openshift-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-controller-manager,Name:controller-manager-5ff9c7cb47-f4k6t,UID:7343df96-cba2-477b-8a1b-7af369620440,APIVersion:v1,ResourceVersion:7793,FieldPath:spec.containers{controller-manager},},Reason:Created,Message:Created container: controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:40:23.799878296 +0000 UTC m=+91.133872922,LastTimestamp:2026-03-13 12:40:23.799878296 +0000 UTC m=+91.133872922,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:43:08.845253 master-0 kubenswrapper[6980]: I0313 12:43:08.845152 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:43:08.845253 master-0 kubenswrapper[6980]: I0313 12:43:08.845242 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:43:11.845292 master-0 kubenswrapper[6980]: I0313 12:43:11.845139 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:43:11.846455 master-0 kubenswrapper[6980]: I0313 12:43:11.845303 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:43:14.844387 master-0 kubenswrapper[6980]: I0313 12:43:14.844292 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:43:14.844387 master-0 kubenswrapper[6980]: I0313 12:43:14.844385 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:43:17.844875 master-0 kubenswrapper[6980]: I0313 12:43:17.844775 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused" start-of-body=
Mar 13 12:43:17.845843 master-0 kubenswrapper[6980]: I0313 12:43:17.844934 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": dial tcp 10.128.0.26:8443: connect: connection refused"
Mar 13 12:43:18.809014 master-0 kubenswrapper[6980]: I0313 12:43:18.808929 6980 status_manager.go:851] "Failed to get status for pod" podUID="7343df96-cba2-477b-8a1b-7af369620440" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods controller-manager-5ff9c7cb47-f4k6t)"
Mar 13 12:43:19.032941 master-0 kubenswrapper[6980]: E0313 12:43:19.032887 6980 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="14.116s"
Mar 13 12:43:19.062700 master-0 kubenswrapper[6980]: W0313 12:43:19.061131 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8226ffac_1f76_4eaa_ada5_056b5fd031b4.slice/crio-91b49f3d1bef1ff2ffc876781ea51843f67335017ffa1e90ffc9330a2dc71785 WatchSource:0}: Error finding container 91b49f3d1bef1ff2ffc876781ea51843f67335017ffa1e90ffc9330a2dc71785: Status 404 returned error can't find the container with id 91b49f3d1bef1ff2ffc876781ea51843f67335017ffa1e90ffc9330a2dc71785
Mar 13 12:43:19.063242 master-0 kubenswrapper[6980]: I0313 12:43:19.063160 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID=""
Mar 13 12:43:19.066773 master-0 kubenswrapper[6980]: W0313 12:43:19.065552 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71b741d4_3899_4d31_afd1_72f5a9321f75.slice/crio-58be068b21a4eb91682595cd919b568f64a42b5eea6271ec682461e07a92c3ae WatchSource:0}: Error finding container 58be068b21a4eb91682595cd919b568f64a42b5eea6271ec682461e07a92c3ae: Status 404 returned error can't find the container with id 58be068b21a4eb91682595cd919b568f64a42b5eea6271ec682461e07a92c3ae
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067243 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067353 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067395 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067411 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067431 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067444 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067457 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067466 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067476 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067488 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" event={"ID":"1929440f-f2cc-450d-80ff-ded6788baa74","Type":"ContainerDied","Data":"add6080be63d96ac6d15e6ae92fd130acd330b669019c0708be53e9f316105b4"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067527 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" event={"ID":"0d868028-9984-472a-8403-ffed767e1bf8","Type":"ContainerDied","Data":"8d3d7c80d1f091cb6801c4897cba8089f08217db69ec67d4a437f0167c034ba9"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067545 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerDied","Data":"cf479c6d6c1b4d3fb1e4d8c534df6ecd64180a47813aaab693ac30875cb0165f"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067559 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"1548830c5fd6aedb1c3d4d7d2384fdb131b3d8e72ab94a40c5ef20cdca9c52d5"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067593 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" event={"ID":"54c7efc1-6d89-4831-89d6-6f2812c36c36","Type":"ContainerDied","Data":"a96046bbc6e2f7a9efce1073fbf280ed5ef6a4fec79a22f6b7f77fdfe7b84349"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067611 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kb5r7" event={"ID":"cf580693-2931-4fef-adb5-b396f7303352","Type":"ContainerDied","Data":"bab02b7b0881c5a887bb7f5e343fcd3261971bd3b26625df2ad95a1d14f0e4fa"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067626 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerDied","Data":"24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067642 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerStarted","Data":"e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067663 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" event={"ID":"603fef71-e0cd-4617-bd8a-a55580578c2f","Type":"ContainerDied","Data":"a593c0e3cdcdc60e311759e5407d46a2222b3d9d443d63f109618c4b09858401"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067685 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerDied","Data":"e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067704 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerStarted","Data":"e28983293a268eeb3a8d9ee62a01d7522220ca8af6806ced1e6376b80a8ffbde"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067723 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"63f3b75b31fa7fc52cd298f2c204c45e0576c862a52323ba1d17c643900efba4"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067742 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"aae10aa9-9c7d-4319-9829-e900af7df301","Type":"ContainerDied","Data":"4bc1f7c933d28f40b13d28985334ae240170d114b669b057cc93fee9fb9f7a73"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067757 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" event={"ID":"684c9067-189a-4f50-ac8d-97111aa73d9c","Type":"ContainerDied","Data":"cc996817afafd2df7fd421372b8e47516fdf24cdaea627bf1268ff842055a746"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067771 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerStarted","Data":"35443773bcdd37ca280fdba5333615f02daa51365a0b805a941d21a3cf11ec6c"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067790 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" event={"ID":"c1213b50-28bf-43ff-94c4-20616907735b","Type":"ContainerDied","Data":"5568c74bf78103146825d0653ed59a230ea4678a37b99c81a8ff3d46062174bd"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067804 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" event={"ID":"eda319d8-825a-4881-96a9-5386b87f8a4f","Type":"ContainerDied","Data":"cbb2865534497635b5ca625e2074d592be0ad7241931d751a9044f1c282a4c0f"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067823 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" event={"ID":"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53","Type":"ContainerDied","Data":"6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067837 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" event={"ID":"a8c840d1-8047-4ad6-a990-3ab119ae1cc5","Type":"ContainerDied","Data":"30b31c049d6bbc747c9d176a9321b53f132ec100e2bcb266f862f58f0efabb73"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067855 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" event={"ID":"1929440f-f2cc-450d-80ff-ded6788baa74","Type":"ContainerStarted","Data":"93bd012c71b9d847c2d4008d18932bd457200b5a03313038f5cebee7a0a7a684"}
Mar 13 12:43:19.067840 master-0 kubenswrapper[6980]: I0313 12:43:19.067867 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" event={"ID":"a8c840d1-8047-4ad6-a990-3ab119ae1cc5","Type":"ContainerStarted","Data":"18b8b89352700ac705362d63b0b71d8367bc46c3021d67a1befbb89fe569fa78"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.067878 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" event={"ID":"0d868028-9984-472a-8403-ffed767e1bf8","Type":"ContainerStarted","Data":"5d5f3aea563c4810ff9db9fa2c6c524f61755e7cebb39d6d38ca9f512f4101e0"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.067895 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"0525d6d9761fef0346024ae4ee861ade4aa61a544af90b2159fea9caf5944f65"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.067906 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" event={"ID":"c1213b50-28bf-43ff-94c4-20616907735b","Type":"ContainerStarted","Data":"8d4eec45db8103811e7a9ea0a4ee194d4eaf95e2b884bee0f9c64da3657f0e11"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.067918 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" event={"ID":"eda319d8-825a-4881-96a9-5386b87f8a4f","Type":"ContainerStarted","Data":"319ee7ad66e145ced987ee71e0eefe369bdfb8abae4d52aebbbdc37b95a29b33"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.067930 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" event={"ID":"684c9067-189a-4f50-ac8d-97111aa73d9c","Type":"ContainerStarted","Data":"710eb299157e1ef547583f7fd20b397c92fa5af65696f69dc8c6e3ebffa2ae8b"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.067941 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-kb5r7" event={"ID":"cf580693-2931-4fef-adb5-b396f7303352","Type":"ContainerStarted","Data":"dd7f930178b541d4d1358be6d1eab803fe0e9f3a36358344bb575bbe4b1af5ad"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.067953 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" event={"ID":"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53","Type":"ContainerStarted","Data":"47445303fded563085ce6c3a29cc03ab2ac1c4b6933c47fecf2b87970e86cfe3"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.067964 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" event={"ID":"54c7efc1-6d89-4831-89d6-6f2812c36c36","Type":"ContainerStarted","Data":"f6baace682ab22b3676d77946fe91f53de263a7c9a6221f4520632faec476bce"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.067976 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" event={"ID":"603fef71-e0cd-4617-bd8a-a55580578c2f","Type":"ContainerStarted","Data":"5191d3df8b0498b5307f4769b9774e6a86549e9ab571348f113fffca00125d49"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.067988 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"aae10aa9-9c7d-4319-9829-e900af7df301","Type":"ContainerDied","Data":"90279ca564e83f63eaf1b9ddebe2c2557bd9c27dd880ed894a069d9a79f4f270"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068011 6980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90279ca564e83f63eaf1b9ddebe2c2557bd9c27dd880ed894a069d9a79f4f270"
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068036 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"fb14b7f25225651cce5060024dd96fe2745167fe14059c382213bb9bcb069656"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068047 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"1deefa2eed04097ebe852cdcfbe526eeadec29031bfced962671dccee87c51d9"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068058 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"6b009be90010b458906ee5384812043c64b344c57f3d33c0327bca957e554f6b"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068069 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"5174065d158bac4c4f8df59a6fd09da4b437cfcdb6c1e02c2fa3d32ae43403ab"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068080 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"aa937531213df9edca1f974017f8219d25e8981234f54f6bab6be21f0713fc0c"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068091 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" event={"ID":"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0","Type":"ContainerDied","Data":"8c953c8136772ca565e28cae4ca94f4cbf7b11aff2c6a974b20aeadfaf72a3c5"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068105 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerDied","Data":"e28983293a268eeb3a8d9ee62a01d7522220ca8af6806ced1e6376b80a8ffbde"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068117 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" event={"ID":"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53","Type":"ContainerDied","Data":"47445303fded563085ce6c3a29cc03ab2ac1c4b6933c47fecf2b87970e86cfe3"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068129 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" event={"ID":"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9","Type":"ContainerDied","Data":"c15cc561a2dc2cb30249635a38f6de933793bd539f9b4fe8d60280e00e99d819"}
Mar 13 12:43:19.069724 master-0 kubenswrapper[6980]: I0313 12:43:19.068895 6980 scope.go:117] "RemoveContainer" containerID="c15cc561a2dc2cb30249635a38f6de933793bd539f9b4fe8d60280e00e99d819"
Mar 13 12:43:19.070402 master-0 kubenswrapper[6980]: I0313 12:43:19.070245 6980 scope.go:117] "RemoveContainer" containerID="e28983293a268eeb3a8d9ee62a01d7522220ca8af6806ced1e6376b80a8ffbde"
Mar 13 12:43:19.074677 master-0 kubenswrapper[6980]: W0313 12:43:19.074048 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59c9773d_7e88_4e30_9b8a_792a869a860e.slice/crio-c376cfcc149f814093143297d444233d029091219b6838c537c7a5d68a679b01 WatchSource:0}: Error finding container c376cfcc149f814093143297d444233d029091219b6838c537c7a5d68a679b01: Status 404 returned error can't find the container with id c376cfcc149f814093143297d444233d029091219b6838c537c7a5d68a679b01
Mar 13 12:43:19.074677 master-0 kubenswrapper[6980]: I0313 12:43:19.074141 6980 scope.go:117] "RemoveContainer" containerID="e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5"
Mar 13 12:43:19.079227 master-0 kubenswrapper[6980]: I0313 12:43:19.075299 6980 scope.go:117] "RemoveContainer" containerID="8c953c8136772ca565e28cae4ca94f4cbf7b11aff2c6a974b20aeadfaf72a3c5"
Mar 13 12:43:19.079227 master-0 kubenswrapper[6980]: I0313 12:43:19.075876 6980 scope.go:117] "RemoveContainer" containerID="47445303fded563085ce6c3a29cc03ab2ac1c4b6933c47fecf2b87970e86cfe3"
Mar 13 12:43:19.079227 master-0 kubenswrapper[6980]: I0313 12:43:19.077436 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"
Mar 13 12:43:19.079227 master-0 kubenswrapper[6980]: I0313 12:43:19.077484 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"
Mar 13 12:43:19.079227 master-0 kubenswrapper[6980]: I0313 12:43:19.077537 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"
Mar 13 12:43:19.083244 master-0 kubenswrapper[6980]: I0313 12:43:19.083085 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"
Mar 13 12:43:19.108962 master-0 kubenswrapper[6980]: I0313 12:43:19.108866 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 13 12:43:19.109205 master-0 kubenswrapper[6980]: I0313 12:43:19.108940 6980 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="368e726e-0282-42c5-b979-a07ff09ede3c"
Mar 13 12:43:19.112754 master-0 kubenswrapper[6980]: I0313 12:43:19.112690 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"]
Mar 13 12:43:19.131995 master-0 kubenswrapper[6980]: I0313 12:43:19.120165 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"]
Mar 13 12:43:19.131995 master-0 kubenswrapper[6980]: I0313 12:43:19.123897 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-ztpxf"]
Mar 13 12:43:19.131995 master-0 kubenswrapper[6980]: I0313 12:43:19.124742 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"]
Mar 13 12:43:19.131995 master-0 kubenswrapper[6980]: I0313 12:43:19.127685 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h"]
Mar 13 12:43:19.135746 master-0 kubenswrapper[6980]: I0313 12:43:19.133686 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-pbgd4"]
Mar 13 12:43:19.150456 master-0 kubenswrapper[6980]: I0313
12:43:19.150399 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 13 12:43:19.150456 master-0 kubenswrapper[6980]: I0313 12:43:19.150440 6980 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="368e726e-0282-42c5-b979-a07ff09ede3c" Mar 13 12:43:19.153999 master-0 kubenswrapper[6980]: I0313 12:43:19.153954 6980 scope.go:117] "RemoveContainer" containerID="24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81" Mar 13 12:43:19.169034 master-0 kubenswrapper[6980]: I0313 12:43:19.168980 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"] Mar 13 12:43:19.171497 master-0 kubenswrapper[6980]: I0313 12:43:19.171456 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 12:43:19.193611 master-0 kubenswrapper[6980]: I0313 12:43:19.193522 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 12:43:19.197872 master-0 kubenswrapper[6980]: I0313 12:43:19.197819 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 12:43:19.259200 master-0 kubenswrapper[6980]: I0313 12:43:19.257972 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 12:43:19.287406 master-0 kubenswrapper[6980]: I0313 12:43:19.284291 6980 scope.go:117] "RemoveContainer" containerID="e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5" Mar 13 12:43:19.289512 master-0 kubenswrapper[6980]: E0313 12:43:19.289426 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5\": container with ID starting with 
e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5 not found: ID does not exist" containerID="e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5" Mar 13 12:43:19.289633 master-0 kubenswrapper[6980]: I0313 12:43:19.289557 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5"} err="failed to get container status \"e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5\": rpc error: code = NotFound desc = could not find container \"e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5\": container with ID starting with e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5 not found: ID does not exist" Mar 13 12:43:19.289693 master-0 kubenswrapper[6980]: I0313 12:43:19.289643 6980 scope.go:117] "RemoveContainer" containerID="24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81" Mar 13 12:43:19.290713 master-0 kubenswrapper[6980]: E0313 12:43:19.290336 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81\": container with ID starting with 24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81 not found: ID does not exist" containerID="24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81" Mar 13 12:43:19.290713 master-0 kubenswrapper[6980]: I0313 12:43:19.290392 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81"} err="failed to get container status \"24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81\": rpc error: code = NotFound desc = could not find container \"24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81\": container with ID starting with 
24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81 not found: ID does not exist" Mar 13 12:43:19.290713 master-0 kubenswrapper[6980]: I0313 12:43:19.290422 6980 scope.go:117] "RemoveContainer" containerID="9db6288a98029b0a09c12d8d262b41839cd5c5aa57fa3824b78834e64ca0ee2e" Mar 13 12:43:19.341421 master-0 kubenswrapper[6980]: I0313 12:43:19.341374 6980 scope.go:117] "RemoveContainer" containerID="59914b16ce26e359fa0f8c879d562000e5c33058f6a9e4b5ad9002af5b9b5469" Mar 13 12:43:19.398441 master-0 kubenswrapper[6980]: I0313 12:43:19.398334 6980 scope.go:117] "RemoveContainer" containerID="6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e" Mar 13 12:43:19.436883 master-0 kubenswrapper[6980]: I0313 12:43:19.436765 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/2.log" Mar 13 12:43:19.440327 master-0 kubenswrapper[6980]: I0313 12:43:19.440294 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" event={"ID":"20217cff-2f81-4a56-9c15-28385c19258c","Type":"ContainerStarted","Data":"bd48d4fa30aeda024af9d88b2a92ab9f3ad6a982cbd20ba4d8bca985b63c0b34"} Mar 13 12:43:19.451346 master-0 kubenswrapper[6980]: I0313 12:43:19.451230 6980 scope.go:117] "RemoveContainer" containerID="e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5" Mar 13 12:43:19.452049 master-0 kubenswrapper[6980]: I0313 12:43:19.452005 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5"} err="failed to get container status \"e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5\": rpc error: code = NotFound desc = could not find container \"e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5\": container with ID 
starting with e7b3e6831ba5c35ef8dbf455eefe2ae6e3d9a5d03976e6ca054330100c38b9e5 not found: ID does not exist" Mar 13 12:43:19.452049 master-0 kubenswrapper[6980]: I0313 12:43:19.452041 6980 scope.go:117] "RemoveContainer" containerID="24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81" Mar 13 12:43:19.452211 master-0 kubenswrapper[6980]: I0313 12:43:19.452118 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" event={"ID":"71b741d4-3899-4d31-afd1-72f5a9321f75","Type":"ContainerStarted","Data":"58be068b21a4eb91682595cd919b568f64a42b5eea6271ec682461e07a92c3ae"} Mar 13 12:43:19.452903 master-0 kubenswrapper[6980]: I0313 12:43:19.452852 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81"} err="failed to get container status \"24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81\": rpc error: code = NotFound desc = could not find container \"24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81\": container with ID starting with 24edba7f5497c0972e734212e06dc818d38a4af1b5a8060724515b3996c83d81 not found: ID does not exist" Mar 13 12:43:19.452903 master-0 kubenswrapper[6980]: I0313 12:43:19.452888 6980 scope.go:117] "RemoveContainer" containerID="6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e" Mar 13 12:43:19.453328 master-0 kubenswrapper[6980]: E0313 12:43:19.453289 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e\": container with ID starting with 6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e not found: ID does not exist" containerID="6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e" Mar 13 12:43:19.453328 master-0 kubenswrapper[6980]: I0313 
12:43:19.453320 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e"} err="failed to get container status \"6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e\": rpc error: code = NotFound desc = could not find container \"6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e\": container with ID starting with 6b4cba65853c79cd0db84c1fde81031516da371bcf468442e2f554a62a2f446e not found: ID does not exist" Mar 13 12:43:19.454789 master-0 kubenswrapper[6980]: I0313 12:43:19.453737 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ztpxf" event={"ID":"59c9773d-7e88-4e30-9b8a-792a869a860e","Type":"ContainerStarted","Data":"c376cfcc149f814093143297d444233d029091219b6838c537c7a5d68a679b01"} Mar 13 12:43:19.458307 master-0 kubenswrapper[6980]: I0313 12:43:19.458220 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" event={"ID":"8226ffac-1f76-4eaa-ada5-056b5fd031b4","Type":"ContainerStarted","Data":"91b49f3d1bef1ff2ffc876781ea51843f67335017ffa1e90ffc9330a2dc71785"} Mar 13 12:43:19.499316 master-0 kubenswrapper[6980]: I0313 12:43:19.496126 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" event={"ID":"4f942fce-07a9-4377-8330-c6249a5a8b24","Type":"ContainerStarted","Data":"310bb063b58a9159851ef88dd90cde60bf53039832d7c07feba8d470bdfa8768"} Mar 13 12:43:19.512658 master-0 kubenswrapper[6980]: I0313 12:43:19.512537 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" event={"ID":"6e4e773c-d970-4f5e-9172-c1ebdb41888d","Type":"ContainerStarted","Data":"cef5b900e1661977211454ffc9aaadd8fa1b91ab51948137171cbc32a2dba7c7"} Mar 13 12:43:19.528322 master-0 kubenswrapper[6980]: I0313 
12:43:19.527633 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-hsrbc_684c9067-189a-4f50-ac8d-97111aa73d9c/kube-apiserver-operator/1.log" Mar 13 12:43:19.532016 master-0 kubenswrapper[6980]: I0313 12:43:19.530145 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" event={"ID":"2a5976df-0366-47b3-bc54-1ba7c249e87c","Type":"ContainerStarted","Data":"4a1a1ca2f1f627a9edd53099939af120013911bcf17806e1f6a21cd1517caec4"} Mar 13 12:43:19.581013 master-0 kubenswrapper[6980]: I0313 12:43:19.580888 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" podStartSLOduration=182.640205546 podStartE2EDuration="3m8.580840297s" podCreationTimestamp="2026-03-13 12:40:11 +0000 UTC" firstStartedPulling="2026-03-13 12:40:17.762853202 +0000 UTC m=+85.096847828" lastFinishedPulling="2026-03-13 12:40:23.703487943 +0000 UTC m=+91.037482579" observedRunningTime="2026-03-13 12:43:19.558652817 +0000 UTC m=+266.892647443" watchObservedRunningTime="2026-03-13 12:43:19.580840297 +0000 UTC m=+266.914834923" Mar 13 12:43:20.137321 master-0 kubenswrapper[6980]: I0313 12:43:20.137209 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:43:20.547265 master-0 kubenswrapper[6980]: I0313 12:43:20.547125 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" event={"ID":"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0","Type":"ContainerStarted","Data":"2e69b748a2fdfe0cc72146b5f2da55d678257606de7db5ec9d71db1e094acc7b"} Mar 13 12:43:20.549526 master-0 kubenswrapper[6980]: I0313 12:43:20.549465 6980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/2.log" Mar 13 12:43:20.550487 master-0 kubenswrapper[6980]: I0313 12:43:20.550404 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerStarted","Data":"05480ecb7de81ac5be34ed4f520482654182603ba660d11e5c077049c5fcab31"} Mar 13 12:43:20.551458 master-0 kubenswrapper[6980]: I0313 12:43:20.550887 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:43:20.558553 master-0 kubenswrapper[6980]: I0313 12:43:20.554065 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" event={"ID":"20217cff-2f81-4a56-9c15-28385c19258c","Type":"ContainerStarted","Data":"9cabc57acb4ba9fddcf21487355b840ba262f8187b22cc0097b7e1e99913616e"} Mar 13 12:43:20.558553 master-0 kubenswrapper[6980]: I0313 12:43:20.556914 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-fcthv_3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/network-operator/0.log" Mar 13 12:43:20.558553 master-0 kubenswrapper[6980]: I0313 12:43:20.556969 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" event={"ID":"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9","Type":"ContainerStarted","Data":"ed65bfdbca8dfa16e925c5ff32ec035401319be4e748458e555d2595822efda4"} Mar 13 12:43:20.561139 master-0 kubenswrapper[6980]: I0313 12:43:20.561078 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/1.log" Mar 13 
12:43:20.561755 master-0 kubenswrapper[6980]: I0313 12:43:20.561710 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" event={"ID":"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53","Type":"ContainerStarted","Data":"deb0d636f01065e6f5848894d42a3d7b49a2a87af22671dc2ab13a618bfa4c1c"} Mar 13 12:43:20.868150 master-0 kubenswrapper[6980]: I0313 12:43:20.868008 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="642c9e64-2d6f-4f0a-babf-8a54e0002415" path="/var/lib/kubelet/pods/642c9e64-2d6f-4f0a-babf-8a54e0002415/volumes" Mar 13 12:43:20.868793 master-0 kubenswrapper[6980]: I0313 12:43:20.868763 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f52d50d6-44fd-47d2-bca6-77be37c69694" path="/var/lib/kubelet/pods/f52d50d6-44fd-47d2-bca6-77be37c69694/volumes" Mar 13 12:43:21.175143 master-0 kubenswrapper[6980]: I0313 12:43:21.174450 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 13 12:43:21.175143 master-0 kubenswrapper[6980]: I0313 12:43:21.174649 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 13 12:43:21.197691 master-0 kubenswrapper[6980]: I0313 12:43:21.197630 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 13 12:43:21.857305 master-0 kubenswrapper[6980]: I0313 12:43:21.857236 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:43:23.138756 master-0 kubenswrapper[6980]: I0313 12:43:23.138595 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:43:23.602705 master-0 kubenswrapper[6980]: I0313 12:43:23.599131 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" event={"ID":"71b741d4-3899-4d31-afd1-72f5a9321f75","Type":"ContainerStarted","Data":"0aefcd44866fb4e23ce3a3a813bc2adb85a41d9f8f9fd29f58e743ed124130e6"} Mar 13 12:43:23.627436 master-0 kubenswrapper[6980]: I0313 12:43:23.616075 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" event={"ID":"6e4e773c-d970-4f5e-9172-c1ebdb41888d","Type":"ContainerStarted","Data":"712ae7e99e5d583d4f1cf7b4f887ed7099fd3d43e3fe5272361b3bb4ea67be51"} Mar 13 12:43:23.627436 master-0 kubenswrapper[6980]: I0313 12:43:23.617118 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:43:23.627436 master-0 kubenswrapper[6980]: I0313 12:43:23.625667 6980 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-7wnld container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Mar 13 12:43:23.627436 master-0 kubenswrapper[6980]: I0313 12:43:23.625745 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" podUID="6e4e773c-d970-4f5e-9172-c1ebdb41888d" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Mar 13 12:43:23.637393 master-0 kubenswrapper[6980]: I0313 12:43:23.632351 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ztpxf" 
event={"ID":"59c9773d-7e88-4e30-9b8a-792a869a860e","Type":"ContainerStarted","Data":"b9fec39b45862887bf4cb008d72cc631ee11a40c1c441bbf05b846a152e2cd33"} Mar 13 12:43:24.641388 master-0 kubenswrapper[6980]: I0313 12:43:24.641310 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-ztpxf" event={"ID":"59c9773d-7e88-4e30-9b8a-792a869a860e","Type":"ContainerStarted","Data":"ba79554bc39bc3ce4357ae0315b5ca4a9bb95a3395629dd85d2798eb573a7359"} Mar 13 12:43:24.644922 master-0 kubenswrapper[6980]: I0313 12:43:24.644877 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" event={"ID":"4f942fce-07a9-4377-8330-c6249a5a8b24","Type":"ContainerStarted","Data":"a77c66d0bbef5ac4ba841e64d029a75b81101530693d755adf73cb234d47aa31"} Mar 13 12:43:24.645025 master-0 kubenswrapper[6980]: I0313 12:43:24.644946 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" event={"ID":"4f942fce-07a9-4377-8330-c6249a5a8b24","Type":"ContainerStarted","Data":"5c411b542b6c604fb634e20ec1667bd444b32f47270e3ec6baff792160a18f75"} Mar 13 12:43:24.649979 master-0 kubenswrapper[6980]: I0313 12:43:24.649932 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:43:24.737349 master-0 kubenswrapper[6980]: I0313 12:43:24.737265 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:43:24.737349 master-0 kubenswrapper[6980]: I0313 12:43:24.737347 6980 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:43:24.844803 master-0 kubenswrapper[6980]: I0313 12:43:24.844538 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:43:24.844803 master-0 kubenswrapper[6980]: I0313 12:43:24.844781 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:43:26.191628 master-0 kubenswrapper[6980]: I0313 12:43:26.191483 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 13 12:43:27.662326 master-0 kubenswrapper[6980]: I0313 12:43:27.662088 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" event={"ID":"8226ffac-1f76-4eaa-ada5-056b5fd031b4","Type":"ContainerStarted","Data":"dad87379d4a88d3f668bbf98aa0d155e05bf12821c47f7b097df1eb97274eb24"} Mar 13 12:43:27.662326 master-0 kubenswrapper[6980]: I0313 12:43:27.662190 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:43:27.665606 master-0 kubenswrapper[6980]: I0313 12:43:27.663493 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" event={"ID":"20217cff-2f81-4a56-9c15-28385c19258c","Type":"ContainerStarted","Data":"f380cb6aa96691042a8cede3619ef1bcaa412985b21e3cadd6963fc297c7968d"} Mar 13 12:43:27.665606 master-0 kubenswrapper[6980]: I0313 12:43:27.663875 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:43:27.665606 master-0 kubenswrapper[6980]: I0313 12:43:27.665236 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" event={"ID":"2a5976df-0366-47b3-bc54-1ba7c249e87c","Type":"ContainerStarted","Data":"a10b29c39c9d49700122a76cf097e2f81482898ba6c50faaac016bd1351292a1"} Mar 13 12:43:27.665606 master-0 kubenswrapper[6980]: I0313 12:43:27.665446 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:43:27.669234 master-0 kubenswrapper[6980]: I0313 12:43:27.669193 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:43:27.670200 master-0 kubenswrapper[6980]: I0313 12:43:27.670163 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:43:27.738246 master-0 kubenswrapper[6980]: I0313 12:43:27.738183 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": 
net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:43:27.738606 master-0 kubenswrapper[6980]: I0313 12:43:27.738552 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:43:27.844493 master-0 kubenswrapper[6980]: I0313 12:43:27.844396 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:43:27.844783 master-0 kubenswrapper[6980]: I0313 12:43:27.844532 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:43:29.904302 master-0 kubenswrapper[6980]: I0313 12:43:29.904201 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:43:29.905100 master-0 kubenswrapper[6980]: I0313 
12:43:29.904326 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:43:30.737587 master-0 kubenswrapper[6980]: I0313 12:43:30.737488 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:43:30.737890 master-0 kubenswrapper[6980]: I0313 12:43:30.737627 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:43:30.737890 master-0 kubenswrapper[6980]: I0313 12:43:30.737695 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:43:30.738439 master-0 kubenswrapper[6980]: I0313 12:43:30.738403 6980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"05480ecb7de81ac5be34ed4f520482654182603ba660d11e5c077049c5fcab31"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" containerMessage="Container openshift-config-operator failed liveness probe, will be 
restarted" Mar 13 12:43:30.738519 master-0 kubenswrapper[6980]: I0313 12:43:30.738476 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" containerID="cri-o://05480ecb7de81ac5be34ed4f520482654182603ba660d11e5c077049c5fcab31" gracePeriod=30 Mar 13 12:43:30.844608 master-0 kubenswrapper[6980]: I0313 12:43:30.844511 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:43:30.844892 master-0 kubenswrapper[6980]: I0313 12:43:30.844657 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:43:30.919759 master-0 kubenswrapper[6980]: E0313 12:43:30.919687 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" is forbidden: the server was unable to return a response in the time allotted, but may still be processing the request (get limitranges)" pod="openshift-etcd/etcd-master-0" Mar 13 12:43:31.845950 master-0 kubenswrapper[6980]: I0313 12:43:31.845852 6980 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-tml9z container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.26:8443/healthz\": net/http: request 
canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 12:43:31.846248 master-0 kubenswrapper[6980]: I0313 12:43:31.845981 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" podUID="edde8919-104a-4f05-8e21-46787f706bed" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.26:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:43:31.869602 master-0 kubenswrapper[6980]: I0313 12:43:31.868887 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:43:31.883861 master-0 kubenswrapper[6980]: I0313 12:43:31.883767 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:43:32.689226 master-0 kubenswrapper[6980]: I0313 12:43:32.688671 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/1.log" Mar 13 12:43:32.689841 master-0 kubenswrapper[6980]: I0313 12:43:32.689316 6980 generic.go:334] "Generic (PLEG): container finished" podID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerID="35443773bcdd37ca280fdba5333615f02daa51365a0b805a941d21a3cf11ec6c" exitCode=255 Mar 13 12:43:32.689841 master-0 kubenswrapper[6980]: I0313 12:43:32.689387 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerDied","Data":"35443773bcdd37ca280fdba5333615f02daa51365a0b805a941d21a3cf11ec6c"} Mar 13 12:43:32.689841 master-0 kubenswrapper[6980]: I0313 12:43:32.689475 6980 
scope.go:117] "RemoveContainer" containerID="cf479c6d6c1b4d3fb1e4d8c534df6ecd64180a47813aaab693ac30875cb0165f" Mar 13 12:43:32.690004 master-0 kubenswrapper[6980]: I0313 12:43:32.689898 6980 scope.go:117] "RemoveContainer" containerID="35443773bcdd37ca280fdba5333615f02daa51365a0b805a941d21a3cf11ec6c" Mar 13 12:43:32.690178 master-0 kubenswrapper[6980]: E0313 12:43:32.690143 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ztmrr_openshift-authentication-operator(f2a74c2a-8376-4998-bdc6-02a978f1f568)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" Mar 13 12:43:32.691757 master-0 kubenswrapper[6980]: I0313 12:43:32.691697 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/3.log" Mar 13 12:43:32.692276 master-0 kubenswrapper[6980]: I0313 12:43:32.692246 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/2.log" Mar 13 12:43:32.693451 master-0 kubenswrapper[6980]: I0313 12:43:32.692668 6980 generic.go:334] "Generic (PLEG): container finished" podID="edde8919-104a-4f05-8e21-46787f706bed" containerID="05480ecb7de81ac5be34ed4f520482654182603ba660d11e5c077049c5fcab31" exitCode=255 Mar 13 12:43:32.693451 master-0 kubenswrapper[6980]: I0313 12:43:32.692708 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" 
event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerDied","Data":"05480ecb7de81ac5be34ed4f520482654182603ba660d11e5c077049c5fcab31"} Mar 13 12:43:32.693451 master-0 kubenswrapper[6980]: I0313 12:43:32.692803 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" event={"ID":"edde8919-104a-4f05-8e21-46787f706bed","Type":"ContainerStarted","Data":"0136833ad9cdb6adaff9bfaf2c0c19dca9f13bc4f29642527d8ad39de5751038"} Mar 13 12:43:32.693719 master-0 kubenswrapper[6980]: I0313 12:43:32.693684 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:43:32.716335 master-0 kubenswrapper[6980]: I0313 12:43:32.716063 6980 scope.go:117] "RemoveContainer" containerID="e28983293a268eeb3a8d9ee62a01d7522220ca8af6806ced1e6376b80a8ffbde" Mar 13 12:43:33.699612 master-0 kubenswrapper[6980]: I0313 12:43:33.699536 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/1.log" Mar 13 12:43:33.701392 master-0 kubenswrapper[6980]: I0313 12:43:33.701359 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/3.log" Mar 13 12:43:35.543883 master-0 kubenswrapper[6980]: I0313 12:43:35.543804 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:43:35.848223 master-0 kubenswrapper[6980]: I0313 12:43:35.848070 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:43:35.876968 master-0 kubenswrapper[6980]: I0313 12:43:35.876870 6980 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.876817166 podStartE2EDuration="876.817166ms" podCreationTimestamp="2026-03-13 12:43:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:43:35.870651718 +0000 UTC m=+283.204646354" watchObservedRunningTime="2026-03-13 12:43:35.876817166 +0000 UTC m=+283.210811792" Mar 13 12:43:38.903126 master-0 kubenswrapper[6980]: I0313 12:43:38.902945 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:43:38.903715 master-0 kubenswrapper[6980]: I0313 12:43:38.903680 6980 scope.go:117] "RemoveContainer" containerID="35443773bcdd37ca280fdba5333615f02daa51365a0b805a941d21a3cf11ec6c" Mar 13 12:43:38.903986 master-0 kubenswrapper[6980]: E0313 12:43:38.903951 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ztmrr_openshift-authentication-operator(f2a74c2a-8376-4998-bdc6-02a978f1f568)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" Mar 13 12:43:42.191372 master-0 kubenswrapper[6980]: I0313 12:43:42.191300 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6vng8"] Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: E0313 12:43:42.191542 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0feecf04-574d-4bf6-968d-77dd5c35260b" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: I0313 12:43:42.191592 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="0feecf04-574d-4bf6-968d-77dd5c35260b" containerName="installer" Mar 13 
12:43:42.192411 master-0 kubenswrapper[6980]: E0313 12:43:42.191610 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f52d50d6-44fd-47d2-bca6-77be37c69694" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: I0313 12:43:42.191616 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f52d50d6-44fd-47d2-bca6-77be37c69694" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: E0313 12:43:42.191626 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7028b88a-ef6e-47f7-bbd7-cf798efdded5" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: I0313 12:43:42.191632 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="7028b88a-ef6e-47f7-bbd7-cf798efdded5" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: E0313 12:43:42.191640 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae10aa9-9c7d-4319-9829-e900af7df301" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: I0313 12:43:42.191646 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae10aa9-9c7d-4319-9829-e900af7df301" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: E0313 12:43:42.191658 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="642c9e64-2d6f-4f0a-babf-8a54e0002415" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: I0313 12:43:42.191664 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="642c9e64-2d6f-4f0a-babf-8a54e0002415" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: I0313 12:43:42.191793 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="642c9e64-2d6f-4f0a-babf-8a54e0002415" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: I0313 12:43:42.191824 6980 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="0feecf04-574d-4bf6-968d-77dd5c35260b" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: I0313 12:43:42.191837 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f52d50d6-44fd-47d2-bca6-77be37c69694" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: I0313 12:43:42.191850 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae10aa9-9c7d-4319-9829-e900af7df301" containerName="installer" Mar 13 12:43:42.192411 master-0 kubenswrapper[6980]: I0313 12:43:42.191861 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="7028b88a-ef6e-47f7-bbd7-cf798efdded5" containerName="installer" Mar 13 12:43:42.193083 master-0 kubenswrapper[6980]: I0313 12:43:42.192887 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.195862 master-0 kubenswrapper[6980]: I0313 12:43:42.195803 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4f2vw" Mar 13 12:43:42.203051 master-0 kubenswrapper[6980]: I0313 12:43:42.202997 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vng8"] Mar 13 12:43:42.318118 master-0 kubenswrapper[6980]: I0313 12:43:42.318046 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pr995\" (UniqueName: \"kubernetes.io/projected/cf9f90f5-643f-41e8-a886-7d19fb064afc-kube-api-access-pr995\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.318118 master-0 kubenswrapper[6980]: I0313 12:43:42.318107 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-catalog-content\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.318412 master-0 kubenswrapper[6980]: I0313 12:43:42.318141 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-utilities\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.384176 master-0 kubenswrapper[6980]: I0313 12:43:42.384119 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6w8hd"] Mar 13 12:43:42.385182 master-0 kubenswrapper[6980]: I0313 12:43:42.385153 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.387637 master-0 kubenswrapper[6980]: I0313 12:43:42.387596 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-78fwj" Mar 13 12:43:42.395852 master-0 kubenswrapper[6980]: I0313 12:43:42.395774 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6w8hd"] Mar 13 12:43:42.436781 master-0 kubenswrapper[6980]: I0313 12:43:42.419257 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr995\" (UniqueName: \"kubernetes.io/projected/cf9f90f5-643f-41e8-a886-7d19fb064afc-kube-api-access-pr995\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.436781 master-0 kubenswrapper[6980]: I0313 12:43:42.419313 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-catalog-content\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.436781 master-0 kubenswrapper[6980]: I0313 12:43:42.419348 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-utilities\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.436781 master-0 kubenswrapper[6980]: I0313 12:43:42.420034 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-utilities\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.436781 master-0 kubenswrapper[6980]: I0313 12:43:42.420087 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-catalog-content\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.457907 master-0 kubenswrapper[6980]: I0313 12:43:42.457779 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr995\" (UniqueName: \"kubernetes.io/projected/cf9f90f5-643f-41e8-a886-7d19fb064afc-kube-api-access-pr995\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.514881 master-0 kubenswrapper[6980]: I0313 12:43:42.514785 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:43:42.521150 master-0 kubenswrapper[6980]: I0313 12:43:42.520777 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djchk\" (UniqueName: \"kubernetes.io/projected/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-kube-api-access-djchk\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.521150 master-0 kubenswrapper[6980]: I0313 12:43:42.521002 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-catalog-content\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.521150 master-0 kubenswrapper[6980]: I0313 12:43:42.521031 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-utilities\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.626161 master-0 kubenswrapper[6980]: I0313 12:43:42.621804 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-utilities\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.626161 master-0 kubenswrapper[6980]: I0313 12:43:42.621852 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-catalog-content\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.626161 master-0 kubenswrapper[6980]: I0313 12:43:42.621885 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djchk\" (UniqueName: \"kubernetes.io/projected/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-kube-api-access-djchk\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.626161 master-0 kubenswrapper[6980]: I0313 12:43:42.622523 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-utilities\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.626161 master-0 kubenswrapper[6980]: I0313 12:43:42.622643 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-catalog-content\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.646172 master-0 kubenswrapper[6980]: I0313 12:43:42.646097 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djchk\" (UniqueName: \"kubernetes.io/projected/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-kube-api-access-djchk\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.738907 master-0 kubenswrapper[6980]: I0313 12:43:42.738710 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:43:42.939940 master-0 kubenswrapper[6980]: I0313 12:43:42.939886 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6vng8"] Mar 13 12:43:42.947431 master-0 kubenswrapper[6980]: W0313 12:43:42.947339 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf9f90f5_643f_41e8_a886_7d19fb064afc.slice/crio-a8c0e7677e600788801fd2620471398efea77f43fbc90f3feb8d2a58a5b40162 WatchSource:0}: Error finding container a8c0e7677e600788801fd2620471398efea77f43fbc90f3feb8d2a58a5b40162: Status 404 returned error can't find the container with id a8c0e7677e600788801fd2620471398efea77f43fbc90f3feb8d2a58a5b40162 Mar 13 12:43:43.147349 master-0 kubenswrapper[6980]: I0313 12:43:43.147288 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6w8hd"] Mar 13 12:43:43.158002 master-0 kubenswrapper[6980]: W0313 12:43:43.157938 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6a9184d_0557_4e61_bf31_6dd69c0dfb15.slice/crio-d56fe854f57a86510068f43b63767127f8679659000c7763b64518661a2fe300 WatchSource:0}: Error finding container d56fe854f57a86510068f43b63767127f8679659000c7763b64518661a2fe300: Status 404 returned error can't find the container with id d56fe854f57a86510068f43b63767127f8679659000c7763b64518661a2fe300 Mar 13 12:43:43.753277 master-0 kubenswrapper[6980]: I0313 12:43:43.753209 6980 generic.go:334] "Generic (PLEG): container finished" podID="b6a9184d-0557-4e61-bf31-6dd69c0dfb15" containerID="950288614d40d58ade55b88b69f7304031ba8ba32f85625e94af8a858ab168fc" exitCode=0 Mar 13 12:43:43.753890 master-0 kubenswrapper[6980]: I0313 12:43:43.753307 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-6w8hd" event={"ID":"b6a9184d-0557-4e61-bf31-6dd69c0dfb15","Type":"ContainerDied","Data":"950288614d40d58ade55b88b69f7304031ba8ba32f85625e94af8a858ab168fc"} Mar 13 12:43:43.753890 master-0 kubenswrapper[6980]: I0313 12:43:43.753336 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w8hd" event={"ID":"b6a9184d-0557-4e61-bf31-6dd69c0dfb15","Type":"ContainerStarted","Data":"d56fe854f57a86510068f43b63767127f8679659000c7763b64518661a2fe300"} Mar 13 12:43:43.755050 master-0 kubenswrapper[6980]: I0313 12:43:43.755018 6980 generic.go:334] "Generic (PLEG): container finished" podID="cf9f90f5-643f-41e8-a886-7d19fb064afc" containerID="091751b8e7d456cdc0a088c29fc232cb40bb6927c85d77df8b3128a26c86c4c6" exitCode=0 Mar 13 12:43:43.755109 master-0 kubenswrapper[6980]: I0313 12:43:43.755056 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vng8" event={"ID":"cf9f90f5-643f-41e8-a886-7d19fb064afc","Type":"ContainerDied","Data":"091751b8e7d456cdc0a088c29fc232cb40bb6927c85d77df8b3128a26c86c4c6"} Mar 13 12:43:43.755109 master-0 kubenswrapper[6980]: I0313 12:43:43.755078 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vng8" event={"ID":"cf9f90f5-643f-41e8-a886-7d19fb064afc","Type":"ContainerStarted","Data":"a8c0e7677e600788801fd2620471398efea77f43fbc90f3feb8d2a58a5b40162"} Mar 13 12:43:43.786671 master-0 kubenswrapper[6980]: I0313 12:43:43.786598 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-92rsn"] Mar 13 12:43:43.790020 master-0 kubenswrapper[6980]: I0313 12:43:43.789976 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:43.792439 master-0 kubenswrapper[6980]: I0313 12:43:43.792405 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7gls2" Mar 13 12:43:43.796532 master-0 kubenswrapper[6980]: I0313 12:43:43.796485 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92rsn"] Mar 13 12:43:43.936132 master-0 kubenswrapper[6980]: I0313 12:43:43.936032 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vt8r\" (UniqueName: \"kubernetes.io/projected/730e1f43-39b7-41de-ac81-270966725477-kube-api-access-2vt8r\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:43.936132 master-0 kubenswrapper[6980]: I0313 12:43:43.936149 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-catalog-content\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:43.936493 master-0 kubenswrapper[6980]: I0313 12:43:43.936226 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-utilities\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:44.041713 master-0 kubenswrapper[6980]: I0313 12:43:44.041540 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-utilities\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:44.041713 master-0 kubenswrapper[6980]: I0313 12:43:44.041681 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vt8r\" (UniqueName: \"kubernetes.io/projected/730e1f43-39b7-41de-ac81-270966725477-kube-api-access-2vt8r\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:44.042002 master-0 kubenswrapper[6980]: I0313 12:43:44.041752 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-catalog-content\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:44.043389 master-0 kubenswrapper[6980]: I0313 12:43:44.042346 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-catalog-content\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:44.043389 master-0 kubenswrapper[6980]: I0313 12:43:44.042698 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-utilities\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:44.060812 master-0 kubenswrapper[6980]: I0313 12:43:44.060767 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vt8r\" 
(UniqueName: \"kubernetes.io/projected/730e1f43-39b7-41de-ac81-270966725477-kube-api-access-2vt8r\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:44.110728 master-0 kubenswrapper[6980]: I0313 12:43:44.110636 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:43:44.506754 master-0 kubenswrapper[6980]: I0313 12:43:44.503980 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 13 12:43:44.506754 master-0 kubenswrapper[6980]: I0313 12:43:44.504596 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.507642 master-0 kubenswrapper[6980]: I0313 12:43:44.507615 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-f7gfd" Mar 13 12:43:44.507813 master-0 kubenswrapper[6980]: I0313 12:43:44.507766 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 13 12:43:44.523170 master-0 kubenswrapper[6980]: I0313 12:43:44.522984 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 13 12:43:44.544140 master-0 kubenswrapper[6980]: I0313 12:43:44.544069 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-92rsn"] Mar 13 12:43:44.546635 master-0 kubenswrapper[6980]: I0313 12:43:44.545908 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-var-lock\") pod \"installer-2-master-0\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") " pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.546635 master-0 kubenswrapper[6980]: I0313 
12:43:44.546022 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") " pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.546635 master-0 kubenswrapper[6980]: I0313 12:43:44.546104 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") " pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.553608 master-0 kubenswrapper[6980]: W0313 12:43:44.553516 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod730e1f43_39b7_41de_ac81_270966725477.slice/crio-72af96f2c7705b273fca5fc5d267412d3d3c7c9e170609cf42269c51f6355917 WatchSource:0}: Error finding container 72af96f2c7705b273fca5fc5d267412d3d3c7c9e170609cf42269c51f6355917: Status 404 returned error can't find the container with id 72af96f2c7705b273fca5fc5d267412d3d3c7c9e170609cf42269c51f6355917 Mar 13 12:43:44.647227 master-0 kubenswrapper[6980]: I0313 12:43:44.647169 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-var-lock\") pod \"installer-2-master-0\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") " pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.647533 master-0 kubenswrapper[6980]: I0313 12:43:44.647240 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kubelet-dir\") pod \"installer-2-master-0\" (UID: 
\"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") " pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.647533 master-0 kubenswrapper[6980]: I0313 12:43:44.647376 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-var-lock\") pod \"installer-2-master-0\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") " pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.647533 master-0 kubenswrapper[6980]: I0313 12:43:44.647423 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") " pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.647533 master-0 kubenswrapper[6980]: I0313 12:43:44.647479 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") " pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.668246 master-0 kubenswrapper[6980]: I0313 12:43:44.668191 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kube-api-access\") pod \"installer-2-master-0\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") " pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.762023 master-0 kubenswrapper[6980]: I0313 12:43:44.761888 6980 generic.go:334] "Generic (PLEG): container finished" podID="730e1f43-39b7-41de-ac81-270966725477" containerID="399693364fe1a370d24538cb2bf5708b63dd362b46742194b9a96b63a3d6deaf" exitCode=0 Mar 13 12:43:44.762023 master-0 kubenswrapper[6980]: I0313 12:43:44.761951 6980 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-92rsn" event={"ID":"730e1f43-39b7-41de-ac81-270966725477","Type":"ContainerDied","Data":"399693364fe1a370d24538cb2bf5708b63dd362b46742194b9a96b63a3d6deaf"} Mar 13 12:43:44.762023 master-0 kubenswrapper[6980]: I0313 12:43:44.761987 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92rsn" event={"ID":"730e1f43-39b7-41de-ac81-270966725477","Type":"ContainerStarted","Data":"72af96f2c7705b273fca5fc5d267412d3d3c7c9e170609cf42269c51f6355917"} Mar 13 12:43:44.846080 master-0 kubenswrapper[6980]: I0313 12:43:44.845991 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 12:43:44.994614 master-0 kubenswrapper[6980]: I0313 12:43:44.994524 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-28fdg"] Mar 13 12:43:44.996194 master-0 kubenswrapper[6980]: I0313 12:43:44.996143 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:44.998853 master-0 kubenswrapper[6980]: I0313 12:43:44.998807 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28fdg"] Mar 13 12:43:45.000228 master-0 kubenswrapper[6980]: I0313 12:43:45.000186 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-6slw7" Mar 13 12:43:45.153919 master-0 kubenswrapper[6980]: I0313 12:43:45.153855 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9tpt\" (UniqueName: \"kubernetes.io/projected/5623ea13-a34b-4510-8902-341912d115df-kube-api-access-q9tpt\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:45.154659 master-0 kubenswrapper[6980]: I0313 12:43:45.153937 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-catalog-content\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:45.154659 master-0 kubenswrapper[6980]: I0313 12:43:45.153996 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-utilities\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:45.247448 master-0 kubenswrapper[6980]: I0313 12:43:45.246917 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 13 12:43:45.255919 master-0 kubenswrapper[6980]: I0313 12:43:45.255874 6980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9tpt\" (UniqueName: \"kubernetes.io/projected/5623ea13-a34b-4510-8902-341912d115df-kube-api-access-q9tpt\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:45.256108 master-0 kubenswrapper[6980]: I0313 12:43:45.256091 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-catalog-content\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:45.256188 master-0 kubenswrapper[6980]: I0313 12:43:45.256132 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-utilities\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:45.258484 master-0 kubenswrapper[6980]: I0313 12:43:45.256799 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-catalog-content\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:45.258484 master-0 kubenswrapper[6980]: I0313 12:43:45.256841 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-utilities\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:45.259643 master-0 kubenswrapper[6980]: W0313 12:43:45.258827 6980 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podf2ae954b_a362_4cd1_8e54_c4aedcf30a00.slice/crio-2c009957e6c0e1187ad15c0418c800a108103fed32e75490f5bcdf096c17f2c6 WatchSource:0}: Error finding container 2c009957e6c0e1187ad15c0418c800a108103fed32e75490f5bcdf096c17f2c6: Status 404 returned error can't find the container with id 2c009957e6c0e1187ad15c0418c800a108103fed32e75490f5bcdf096c17f2c6 Mar 13 12:43:45.279478 master-0 kubenswrapper[6980]: I0313 12:43:45.279414 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9tpt\" (UniqueName: \"kubernetes.io/projected/5623ea13-a34b-4510-8902-341912d115df-kube-api-access-q9tpt\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:45.321091 master-0 kubenswrapper[6980]: I0313 12:43:45.320966 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:43:45.760355 master-0 kubenswrapper[6980]: I0313 12:43:45.760296 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-28fdg"] Mar 13 12:43:45.778609 master-0 kubenswrapper[6980]: I0313 12:43:45.778555 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"f2ae954b-a362-4cd1-8e54-c4aedcf30a00","Type":"ContainerStarted","Data":"0a5b3570a3db3335c8eec162d41987493203c31e437d042c22accb68c0ffa63a"} Mar 13 12:43:45.779067 master-0 kubenswrapper[6980]: I0313 12:43:45.778615 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"f2ae954b-a362-4cd1-8e54-c4aedcf30a00","Type":"ContainerStarted","Data":"2c009957e6c0e1187ad15c0418c800a108103fed32e75490f5bcdf096c17f2c6"} Mar 13 12:43:45.797841 master-0 kubenswrapper[6980]: I0313 12:43:45.797739 6980 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=1.797708042 podStartE2EDuration="1.797708042s" podCreationTimestamp="2026-03-13 12:43:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:43:45.793850044 +0000 UTC m=+293.127844700" watchObservedRunningTime="2026-03-13 12:43:45.797708042 +0000 UTC m=+293.131702688" Mar 13 12:43:46.801335 master-0 kubenswrapper[6980]: I0313 12:43:46.801283 6980 generic.go:334] "Generic (PLEG): container finished" podID="5623ea13-a34b-4510-8902-341912d115df" containerID="afcd89fe0d1290aaaef3733e8919ef539e12266d0a9c01b2e1c115fd05956b73" exitCode=0 Mar 13 12:43:46.801335 master-0 kubenswrapper[6980]: I0313 12:43:46.801431 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28fdg" event={"ID":"5623ea13-a34b-4510-8902-341912d115df","Type":"ContainerDied","Data":"afcd89fe0d1290aaaef3733e8919ef539e12266d0a9c01b2e1c115fd05956b73"} Mar 13 12:43:46.801335 master-0 kubenswrapper[6980]: I0313 12:43:46.801653 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28fdg" event={"ID":"5623ea13-a34b-4510-8902-341912d115df","Type":"ContainerStarted","Data":"7a24aff88a2b33793c90602bd0f46317c68b5e2becc49d106f2e8cd82fff29f4"} Mar 13 12:43:50.056641 master-0 kubenswrapper[6980]: I0313 12:43:50.055887 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7"] Mar 13 12:43:50.057418 master-0 kubenswrapper[6980]: I0313 12:43:50.056796 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7"] Mar 13 12:43:50.057418 master-0 kubenswrapper[6980]: I0313 12:43:50.057361 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" Mar 13 12:43:50.061767 master-0 kubenswrapper[6980]: I0313 12:43:50.057812 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.061767 master-0 kubenswrapper[6980]: I0313 12:43:50.061569 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 12:43:50.062154 master-0 kubenswrapper[6980]: I0313 12:43:50.061817 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dh2b7" Mar 13 12:43:50.062154 master-0 kubenswrapper[6980]: I0313 12:43:50.061959 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:43:50.062755 master-0 kubenswrapper[6980]: I0313 12:43:50.062395 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:43:50.062848 master-0 kubenswrapper[6980]: I0313 12:43:50.062768 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 12:43:50.062848 master-0 kubenswrapper[6980]: I0313 12:43:50.062816 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 12:43:50.064610 master-0 kubenswrapper[6980]: I0313 12:43:50.064528 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 12:43:50.065780 master-0 kubenswrapper[6980]: I0313 12:43:50.064728 6980 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 12:43:50.065780 master-0 kubenswrapper[6980]: I0313 12:43:50.064888 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 12:43:50.065780 master-0 kubenswrapper[6980]: I0313 12:43:50.065010 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 12:43:50.065780 master-0 kubenswrapper[6980]: I0313 12:43:50.065132 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-xv4qd" Mar 13 12:43:50.065780 master-0 kubenswrapper[6980]: I0313 12:43:50.065292 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 12:43:50.070683 master-0 kubenswrapper[6980]: I0313 12:43:50.070656 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"] Mar 13 12:43:50.070921 master-0 kubenswrapper[6980]: I0313 12:43:50.070894 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" podUID="e3eb38e0-d8b5-46fc-809d-73791d569816" containerName="cluster-version-operator" containerID="cri-o://4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58" gracePeriod=130 Mar 13 12:43:50.160649 master-0 kubenswrapper[6980]: I0313 12:43:50.159769 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-s4gd8"] Mar 13 12:43:50.160649 master-0 kubenswrapper[6980]: I0313 12:43:50.160420 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:43:50.165072 master-0 kubenswrapper[6980]: I0313 12:43:50.162820 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 13 12:43:50.172620 master-0 kubenswrapper[6980]: I0313 12:43:50.169389 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz"] Mar 13 12:43:50.172620 master-0 kubenswrapper[6980]: I0313 12:43:50.170529 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz" Mar 13 12:43:50.172853 master-0 kubenswrapper[6980]: I0313 12:43:50.172674 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-auth-proxy-config\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" Mar 13 12:43:50.172853 master-0 kubenswrapper[6980]: I0313 12:43:50.172791 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5a3a7953-ad67-432a-a546-71a5d4450ddd-machine-approver-tls\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" Mar 13 12:43:50.172923 master-0 kubenswrapper[6980]: I0313 12:43:50.172857 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/b515c4c5-cec7-46d2-a435-1d46e26c30b8-cloud-controller-manager-operator-tls\") pod 
\"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.172923 master-0 kubenswrapper[6980]: I0313 12:43:50.172891 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjdl6\" (UniqueName: \"kubernetes.io/projected/b515c4c5-cec7-46d2-a435-1d46e26c30b8-kube-api-access-hjdl6\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.172987 master-0 kubenswrapper[6980]: I0313 12:43:50.172952 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msvdh\" (UniqueName: \"kubernetes.io/projected/5a3a7953-ad67-432a-a546-71a5d4450ddd-kube-api-access-msvdh\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.173183 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b515c4c5-cec7-46d2-a435-1d46e26c30b8-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.173238 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-config\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.173269 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.173300 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.174934 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.175120 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.175241 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-kcbnp" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.175387 6980 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-insights"/"kube-root-ca.crt" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.175513 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.175685 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"] Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.176572 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xg9t5" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.176747 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.176859 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.176863 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 12:43:50.178377 master-0 kubenswrapper[6980]: I0313 12:43:50.177059 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 13 12:43:50.193747 master-0 kubenswrapper[6980]: I0313 12:43:50.186412 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 12:43:50.193747 master-0 kubenswrapper[6980]: I0313 12:43:50.186646 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 13 12:43:50.193747 master-0 kubenswrapper[6980]: I0313 12:43:50.186762 6980 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 13 12:43:50.193747 master-0 kubenswrapper[6980]: I0313 12:43:50.187266 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 13 12:43:50.193747 master-0 kubenswrapper[6980]: I0313 12:43:50.187371 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"] Mar 13 12:43:50.226988 master-0 kubenswrapper[6980]: I0313 12:43:50.225994 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gft2f" Mar 13 12:43:50.316449 master-0 kubenswrapper[6980]: I0313 12:43:50.316194 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577" Mar 13 12:43:50.323599 master-0 kubenswrapper[6980]: I0313 12:43:50.318972 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5a3a7953-ad67-432a-a546-71a5d4450ddd-machine-approver-tls\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" Mar 13 12:43:50.323599 master-0 kubenswrapper[6980]: I0313 12:43:50.319292 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-service-ca-bundle\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:43:50.323599 master-0 kubenswrapper[6980]: I0313 12:43:50.319385 6980 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/b515c4c5-cec7-46d2-a435-1d46e26c30b8-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.323599 master-0 kubenswrapper[6980]: I0313 12:43:50.319691 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjdl6\" (UniqueName: \"kubernetes.io/projected/b515c4c5-cec7-46d2-a435-1d46e26c30b8-kube-api-access-hjdl6\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.323599 master-0 kubenswrapper[6980]: I0313 12:43:50.319745 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-serving-cert\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:43:50.323599 master-0 kubenswrapper[6980]: I0313 12:43:50.319966 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg5p4\" (UniqueName: \"kubernetes.io/projected/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-kube-api-access-dg5p4\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:43:50.323599 master-0 kubenswrapper[6980]: I0313 12:43:50.320008 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/6592aa5b-4a50-40f6-80a5-87e3c547018d-cert\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" Mar 13 12:43:50.323599 master-0 kubenswrapper[6980]: I0313 12:43:50.320071 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msvdh\" (UniqueName: \"kubernetes.io/projected/5a3a7953-ad67-432a-a546-71a5d4450ddd-kube-api-access-msvdh\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" Mar 13 12:43:50.324073 master-0 kubenswrapper[6980]: I0313 12:43:50.323923 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cscxl\" (UniqueName: \"kubernetes.io/projected/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-kube-api-access-cscxl\") pod \"cluster-samples-operator-664cb58b85-78swz\" (UID: \"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz" Mar 13 12:43:50.329651 master-0 kubenswrapper[6980]: I0313 12:43:50.325383 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6592aa5b-4a50-40f6-80a5-87e3c547018d-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" Mar 13 12:43:50.329651 master-0 kubenswrapper[6980]: I0313 12:43:50.325469 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cloud-credential-operator-serving-cert\") pod 
\"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577" Mar 13 12:43:50.329651 master-0 kubenswrapper[6980]: I0313 12:43:50.325516 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-78swz\" (UID: \"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz" Mar 13 12:43:50.329651 master-0 kubenswrapper[6980]: I0313 12:43:50.325599 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b515c4c5-cec7-46d2-a435-1d46e26c30b8-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.329651 master-0 kubenswrapper[6980]: I0313 12:43:50.325651 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-config\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" Mar 13 12:43:50.329651 master-0 kubenswrapper[6980]: I0313 12:43:50.325681 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.329651 master-0 kubenswrapper[6980]: I0313 12:43:50.325711 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.329651 master-0 kubenswrapper[6980]: I0313 12:43:50.325759 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s7cgb\" (UniqueName: \"kubernetes.io/projected/6592aa5b-4a50-40f6-80a5-87e3c547018d-kube-api-access-s7cgb\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" Mar 13 12:43:50.329651 master-0 kubenswrapper[6980]: I0313 12:43:50.326237 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"] Mar 13 12:43:50.368950 master-0 kubenswrapper[6980]: I0313 12:43:50.365246 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b515c4c5-cec7-46d2-a435-1d46e26c30b8-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.368950 master-0 kubenswrapper[6980]: I0313 12:43:50.366113 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7"
Mar 13 12:43:50.368950 master-0 kubenswrapper[6980]: I0313 12:43:50.366335 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-config\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7"
Mar 13 12:43:50.368950 master-0 kubenswrapper[6980]: I0313 12:43:50.366565 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7"
Mar 13 12:43:50.368950 master-0 kubenswrapper[6980]: I0313 12:43:50.366772 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9n5pq"
Mar 13 12:43:50.368950 master-0 kubenswrapper[6980]: I0313 12:43:50.366954 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 13 12:43:50.368950 master-0 kubenswrapper[6980]: I0313 12:43:50.367741 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz"]
Mar 13 12:43:50.368950 master-0 kubenswrapper[6980]: I0313 12:43:50.367845 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"
Mar 13 12:43:50.372424 master-0 kubenswrapper[6980]: I0313 12:43:50.370504 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5a3a7953-ad67-432a-a546-71a5d4450ddd-machine-approver-tls\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7"
Mar 13 12:43:50.372424 master-0 kubenswrapper[6980]: I0313 12:43:50.371235 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-5lcmq"
Mar 13 12:43:50.372424 master-0 kubenswrapper[6980]: I0313 12:43:50.371461 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 13 12:43:50.373210 master-0 kubenswrapper[6980]: I0313 12:43:50.373139 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"]
Mar 13 12:43:50.379791 master-0 kubenswrapper[6980]: I0313 12:43:50.379690 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/b515c4c5-cec7-46d2-a435-1d46e26c30b8-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7"
Mar 13 12:43:50.380462 master-0 kubenswrapper[6980]: I0313 12:43:50.380130 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 13 12:43:50.380763 master-0 kubenswrapper[6980]: I0313 12:43:50.380721 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 13 12:43:50.380997 master-0 kubenswrapper[6980]: I0313 12:43:50.380957 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 13 12:43:50.381263 master-0 kubenswrapper[6980]: I0313 12:43:50.381224 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msvdh\" (UniqueName: \"kubernetes.io/projected/5a3a7953-ad67-432a-a546-71a5d4450ddd-kube-api-access-msvdh\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7"
Mar 13 12:43:50.381809 master-0 kubenswrapper[6980]: I0313 12:43:50.345254 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:43:50.381864 master-0 kubenswrapper[6980]: I0313 12:43:50.381839 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-auth-proxy-config\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7"
Mar 13 12:43:50.381898 master-0 kubenswrapper[6980]: I0313 12:43:50.381869 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-snapshots\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.381932 master-0 kubenswrapper[6980]: I0313 12:43:50.381919 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lm4d2\" (UniqueName: \"kubernetes.io/projected/31442e1e-3f42-4dba-82d5-08e5f8d29a58-kube-api-access-lm4d2\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:43:50.381960 master-0 kubenswrapper[6980]: I0313 12:43:50.381946 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.384713 master-0 kubenswrapper[6980]: I0313 12:43:50.384439 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-s4gd8"]
Mar 13 12:43:50.390738 master-0 kubenswrapper[6980]: I0313 12:43:50.386891 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"]
Mar 13 12:43:50.391033 master-0 kubenswrapper[6980]: I0313 12:43:50.391000 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-auth-proxy-config\") pod \"machine-approver-955fcfb87-s85h7\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7"
Mar 13 12:43:50.391283 master-0 kubenswrapper[6980]: I0313 12:43:50.391230 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7"
Mar 13 12:43:50.402108 master-0 kubenswrapper[6980]: I0313 12:43:50.400695 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"]
Mar 13 12:43:50.424438 master-0 kubenswrapper[6980]: I0313 12:43:50.423352 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"]
Mar 13 12:43:50.428388 master-0 kubenswrapper[6980]: I0313 12:43:50.426294 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"]
Mar 13 12:43:50.428388 master-0 kubenswrapper[6980]: I0313 12:43:50.426738 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:43:50.428388 master-0 kubenswrapper[6980]: I0313 12:43:50.427208 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:43:50.438903 master-0 kubenswrapper[6980]: I0313 12:43:50.438288 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 13 12:43:50.438903 master-0 kubenswrapper[6980]: I0313 12:43:50.438561 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 13 12:43:50.438903 master-0 kubenswrapper[6980]: I0313 12:43:50.438713 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-46jst"
Mar 13 12:43:50.438903 master-0 kubenswrapper[6980]: I0313 12:43:50.438820 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 13 12:43:50.439215 master-0 kubenswrapper[6980]: I0313 12:43:50.438927 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 13 12:43:50.439215 master-0 kubenswrapper[6980]: I0313 12:43:50.439010 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-7fzhf"
Mar 13 12:43:50.439215 master-0 kubenswrapper[6980]: I0313 12:43:50.439041 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 13 12:43:50.439215 master-0 kubenswrapper[6980]: I0313 12:43:50.439199 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 13 12:43:50.439338 master-0 kubenswrapper[6980]: I0313 12:43:50.439278 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 13 12:43:50.461621 master-0 kubenswrapper[6980]: I0313 12:43:50.458115 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjdl6\" (UniqueName: \"kubernetes.io/projected/b515c4c5-cec7-46d2-a435-1d46e26c30b8-kube-api-access-hjdl6\") pod \"cluster-cloud-controller-manager-operator-559568b945-jdwm7\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7"
Mar 13 12:43:50.473605 master-0 kubenswrapper[6980]: I0313 12:43:50.470871 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz"]
Mar 13 12:43:50.473605 master-0 kubenswrapper[6980]: I0313 12:43:50.471669 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.483312 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"]
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.484277 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.488314 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.488925 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-l78bb"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.489228 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.489538 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-cjs56"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.489897 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.490181 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.490345 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.490688 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492710 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7cgb\" (UniqueName: \"kubernetes.io/projected/6592aa5b-4a50-40f6-80a5-87e3c547018d-kube-api-access-s7cgb\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492743 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492788 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-snapshots\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492816 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2894g\" (UniqueName: \"kubernetes.io/projected/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-kube-api-access-2894g\") pod \"cluster-storage-operator-6fbfc8dc8f-hr4ws\" (UID: \"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492855 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-hr4ws\" (UID: \"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492882 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm4d2\" (UniqueName: \"kubernetes.io/projected/31442e1e-3f42-4dba-82d5-08e5f8d29a58-kube-api-access-lm4d2\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492903 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492928 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-service-ca-bundle\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492953 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-serving-cert\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492970 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg5p4\" (UniqueName: \"kubernetes.io/projected/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-kube-api-access-dg5p4\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.492987 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6592aa5b-4a50-40f6-80a5-87e3c547018d-cert\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.493016 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cscxl\" (UniqueName: \"kubernetes.io/projected/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-kube-api-access-cscxl\") pod \"cluster-samples-operator-664cb58b85-78swz\" (UID: \"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.493034 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6592aa5b-4a50-40f6-80a5-87e3c547018d-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.493052 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:43:50.493819 master-0 kubenswrapper[6980]: I0313 12:43:50.493070 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-78swz\" (UID: \"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz"
Mar 13 12:43:50.500318 master-0 kubenswrapper[6980]: I0313 12:43:50.494387 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.500318 master-0 kubenswrapper[6980]: I0313 12:43:50.495229 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-snapshots\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.500318 master-0 kubenswrapper[6980]: I0313 12:43:50.495684 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:43:50.500318 master-0 kubenswrapper[6980]: I0313 12:43:50.499035 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"]
Mar 13 12:43:50.500318 master-0 kubenswrapper[6980]: I0313 12:43:50.499045 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-service-ca-bundle\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.509110 master-0 kubenswrapper[6980]: I0313 12:43:50.501306 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6592aa5b-4a50-40f6-80a5-87e3c547018d-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"
Mar 13 12:43:50.509110 master-0 kubenswrapper[6980]: I0313 12:43:50.501426 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz"]
Mar 13 12:43:50.527146 master-0 kubenswrapper[6980]: I0313 12:43:50.519149 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-serving-cert\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.527146 master-0 kubenswrapper[6980]: I0313 12:43:50.525478 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6592aa5b-4a50-40f6-80a5-87e3c547018d-cert\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"
Mar 13 12:43:50.540251 master-0 kubenswrapper[6980]: I0313 12:43:50.538268 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"]
Mar 13 12:43:50.540251 master-0 kubenswrapper[6980]: I0313 12:43:50.539524 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg5p4\" (UniqueName: \"kubernetes.io/projected/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-kube-api-access-dg5p4\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:43:50.540251 master-0 kubenswrapper[6980]: I0313 12:43:50.539633 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cscxl\" (UniqueName: \"kubernetes.io/projected/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-kube-api-access-cscxl\") pod \"cluster-samples-operator-664cb58b85-78swz\" (UID: \"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz"
Mar 13 12:43:50.544005 master-0 kubenswrapper[6980]: I0313 12:43:50.543951 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:43:50.555289 master-0 kubenswrapper[6980]: I0313 12:43:50.554941 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm4d2\" (UniqueName: \"kubernetes.io/projected/31442e1e-3f42-4dba-82d5-08e5f8d29a58-kube-api-access-lm4d2\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:43:50.557163 master-0 kubenswrapper[6980]: I0313 12:43:50.556384 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"]
Mar 13 12:43:50.562241 master-0 kubenswrapper[6980]: I0313 12:43:50.562195 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"
Mar 13 12:43:50.563600 master-0 kubenswrapper[6980]: I0313 12:43:50.563320 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7cgb\" (UniqueName: \"kubernetes.io/projected/6592aa5b-4a50-40f6-80a5-87e3c547018d-kube-api-access-s7cgb\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"
Mar 13 12:43:50.567447 master-0 kubenswrapper[6980]: I0313 12:43:50.565363 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 13 12:43:50.567447 master-0 kubenswrapper[6980]: I0313 12:43:50.566076 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"
Mar 13 12:43:50.569693 master-0 kubenswrapper[6980]: I0313 12:43:50.568967 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-jwq7f"
Mar 13 12:43:50.573473 master-0 kubenswrapper[6980]: I0313 12:43:50.572239 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-78swz\" (UID: \"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz"
Mar 13 12:43:50.581904 master-0 kubenswrapper[6980]: I0313 12:43:50.578155 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:43:50.588094 master-0 kubenswrapper[6980]: I0313 12:43:50.587948 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"]
Mar 13 12:43:50.594535 master-0 kubenswrapper[6980]: I0313 12:43:50.594466 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/03758d96-5a20-4cba-92e0-47f5b1a3e558-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:43:50.594535 master-0 kubenswrapper[6980]: I0313 12:43:50.594529 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-images\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:43:50.594820 master-0 kubenswrapper[6980]: I0313 12:43:50.594599 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55v4q\" (UniqueName: \"kubernetes.io/projected/03758d96-5a20-4cba-92e0-47f5b1a3e558-kube-api-access-55v4q\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:43:50.594820 master-0 kubenswrapper[6980]: I0313 12:43:50.594633 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/14eb83e7-c436-4f10-8cba-29e09a7036a8-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:43:50.594820 master-0 kubenswrapper[6980]: I0313 12:43:50.594715 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqhcp\" (UniqueName: \"kubernetes.io/projected/e0763043-3813-43b6-9618-b2d15c942edb-kube-api-access-mqhcp\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:43:50.594820 master-0 kubenswrapper[6980]: I0313 12:43:50.594750 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2894g\" (UniqueName: \"kubernetes.io/projected/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-kube-api-access-2894g\") pod \"cluster-storage-operator-6fbfc8dc8f-hr4ws\" (UID: \"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"
Mar 13 12:43:50.594820 master-0 kubenswrapper[6980]: I0313 12:43:50.594777 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-hr4ws\" (UID: \"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"
Mar 13 12:43:50.595220 master-0 kubenswrapper[6980]: I0313 12:43:50.595166 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv745\" (UniqueName: \"kubernetes.io/projected/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-kube-api-access-cv745\") pod \"control-plane-machine-set-operator-6686554ddc-d7qrz\" (UID: \"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz"
Mar 13 12:43:50.595335 master-0 kubenswrapper[6980]: I0313 12:43:50.595305 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-config\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:43:50.595412 master-0 kubenswrapper[6980]: I0313 12:43:50.595380 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-images\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:43:50.595551 master-0 kubenswrapper[6980]: I0313 12:43:50.595524 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:43:50.595651 master-0 kubenswrapper[6980]: I0313 12:43:50.595568 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvn5d\" (UniqueName: \"kubernetes.io/projected/14eb83e7-c436-4f10-8cba-29e09a7036a8-kube-api-access-kvn5d\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:43:50.595925 master-0 kubenswrapper[6980]: I0313 12:43:50.595757 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:43:50.595925 master-0 kubenswrapper[6980]: I0313 12:43:50.595850 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:43:50.595925 master-0 kubenswrapper[6980]: I0313 12:43:50.595891 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:43:50.596257 master-0 kubenswrapper[6980]: I0313 12:43:50.596220 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-images\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:43:50.596316 master-0 kubenswrapper[6980]: I0313 12:43:50.596280 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-d7qrz\" (UID: \"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz"
Mar 13 12:43:50.611687 master-0 kubenswrapper[6980]: I0313 12:43:50.609090 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-hr4ws\" (UID: \"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"
Mar 13 12:43:50.638597 master-0 kubenswrapper[6980]: I0313 12:43:50.638532 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"]
Mar 13 12:43:50.648960 master-0 kubenswrapper[6980]: I0313 12:43:50.648823 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2894g\" (UniqueName: \"kubernetes.io/projected/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-kube-api-access-2894g\") pod \"cluster-storage-operator-6fbfc8dc8f-hr4ws\" (UID: \"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"
Mar 13 12:43:50.659143 master-0 kubenswrapper[6980]: I0313 12:43:50.659026 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"
Mar 13 12:43:50.705539 master-0 kubenswrapper[6980]: I0313 12:43:50.705474 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqhcp\" (UniqueName: \"kubernetes.io/projected/e0763043-3813-43b6-9618-b2d15c942edb-kube-api-access-mqhcp\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:43:50.705719 master-0 kubenswrapper[6980]: I0313 12:43:50.705562 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv745\" (UniqueName: \"kubernetes.io/projected/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-kube-api-access-cv745\") pod \"control-plane-machine-set-operator-6686554ddc-d7qrz\" (UID: \"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz"
Mar 13 12:43:50.705719 master-0 kubenswrapper[6980]: I0313 12:43:50.705630 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-config\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:43:50.705719 master-0 kubenswrapper[6980]: I0313 12:43:50.705672 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-images\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:43:50.705891 master-0 kubenswrapper[6980]: I0313 12:43:50.705719 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90c6474d-44a1-4164-a85b-6de0525dc656-tmpfs\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"
Mar 13 12:43:50.705891 master-0 kubenswrapper[6980]: I0313 12:43:50.705753 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:43:50.705891 master-0 kubenswrapper[6980]: I0313 12:43:50.705780 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvn5d\" (UniqueName: \"kubernetes.io/projected/14eb83e7-c436-4f10-8cba-29e09a7036a8-kube-api-access-kvn5d\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:43:50.705891 master-0 kubenswrapper[6980]: I0313 12:43:50.705825 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" 
(UniqueName: \"kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" Mar 13 12:43:50.705891 master-0 kubenswrapper[6980]: I0313 12:43:50.705852 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-webhook-cert\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.705891 master-0 kubenswrapper[6980]: I0313 12:43:50.705901 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:43:50.706315 master-0 kubenswrapper[6980]: I0313 12:43:50.705927 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwjh6\" (UniqueName: \"kubernetes.io/projected/90c6474d-44a1-4164-a85b-6de0525dc656-kube-api-access-wwjh6\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.706315 master-0 kubenswrapper[6980]: I0313 12:43:50.705973 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: 
\"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:43:50.706315 master-0 kubenswrapper[6980]: I0313 12:43:50.706006 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-images\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" Mar 13 12:43:50.706315 master-0 kubenswrapper[6980]: I0313 12:43:50.706073 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-d7qrz\" (UID: \"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz" Mar 13 12:43:50.706315 master-0 kubenswrapper[6980]: I0313 12:43:50.706105 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-apiservice-cert\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.706315 master-0 kubenswrapper[6980]: I0313 12:43:50.706135 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-images\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" Mar 13 12:43:50.706315 master-0 kubenswrapper[6980]: I0313 12:43:50.706159 6980 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/03758d96-5a20-4cba-92e0-47f5b1a3e558-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" Mar 13 12:43:50.706315 master-0 kubenswrapper[6980]: I0313 12:43:50.706215 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55v4q\" (UniqueName: \"kubernetes.io/projected/03758d96-5a20-4cba-92e0-47f5b1a3e558-kube-api-access-55v4q\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" Mar 13 12:43:50.706315 master-0 kubenswrapper[6980]: I0313 12:43:50.706245 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/14eb83e7-c436-4f10-8cba-29e09a7036a8-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" Mar 13 12:43:50.710063 master-0 kubenswrapper[6980]: I0313 12:43:50.710028 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-images\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" Mar 13 12:43:50.711249 master-0 kubenswrapper[6980]: I0313 12:43:50.711214 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-images\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: 
\"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:43:50.713506 master-0 kubenswrapper[6980]: I0313 12:43:50.713467 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" Mar 13 12:43:50.714785 master-0 kubenswrapper[6980]: I0313 12:43:50.714489 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:43:50.715849 master-0 kubenswrapper[6980]: I0313 12:43:50.715819 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-config\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" Mar 13 12:43:50.730562 master-0 kubenswrapper[6980]: I0313 12:43:50.716512 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-images\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" Mar 13 12:43:50.736296 master-0 kubenswrapper[6980]: I0313 12:43:50.721925 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:43:50.745344 master-0 kubenswrapper[6980]: I0313 12:43:50.739376 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55v4q\" (UniqueName: \"kubernetes.io/projected/03758d96-5a20-4cba-92e0-47f5b1a3e558-kube-api-access-55v4q\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" Mar 13 12:43:50.745557 master-0 kubenswrapper[6980]: I0313 12:43:50.739399 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:43:50.762107 master-0 kubenswrapper[6980]: I0313 12:43:50.761776 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqhcp\" (UniqueName: \"kubernetes.io/projected/e0763043-3813-43b6-9618-b2d15c942edb-kube-api-access-mqhcp\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:43:50.764550 master-0 kubenswrapper[6980]: I0313 12:43:50.764513 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-d7qrz\" (UID: \"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz" Mar 13 12:43:50.765813 master-0 kubenswrapper[6980]: I0313 12:43:50.765788 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/14eb83e7-c436-4f10-8cba-29e09a7036a8-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: 
\"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" Mar 13 12:43:50.766276 master-0 kubenswrapper[6980]: I0313 12:43:50.765864 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:43:50.766438 master-0 kubenswrapper[6980]: I0313 12:43:50.766402 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:43:50.766511 master-0 kubenswrapper[6980]: I0313 12:43:50.766481 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/03758d96-5a20-4cba-92e0-47f5b1a3e558-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" Mar 13 12:43:50.767040 master-0 kubenswrapper[6980]: I0313 12:43:50.766729 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv745\" (UniqueName: \"kubernetes.io/projected/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-kube-api-access-cv745\") pod \"control-plane-machine-set-operator-6686554ddc-d7qrz\" (UID: \"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz" Mar 13 12:43:50.770115 master-0 kubenswrapper[6980]: I0313 12:43:50.769971 6980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvn5d\" (UniqueName: \"kubernetes.io/projected/14eb83e7-c436-4f10-8cba-29e09a7036a8-kube-api-access-kvn5d\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.805106 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz" Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.813881 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3eb38e0-d8b5-46fc-809d-73791d569816-service-ca\") pod \"e3eb38e0-d8b5-46fc-809d-73791d569816\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.813954 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") pod \"e3eb38e0-d8b5-46fc-809d-73791d569816\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.813990 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3eb38e0-d8b5-46fc-809d-73791d569816-kube-api-access\") pod \"e3eb38e0-d8b5-46fc-809d-73791d569816\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.814084 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-cvo-updatepayloads\") pod 
\"e3eb38e0-d8b5-46fc-809d-73791d569816\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.814145 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-ssl-certs\") pod \"e3eb38e0-d8b5-46fc-809d-73791d569816\" (UID: \"e3eb38e0-d8b5-46fc-809d-73791d569816\") " Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.814326 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90c6474d-44a1-4164-a85b-6de0525dc656-tmpfs\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.814379 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-webhook-cert\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.814418 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwjh6\" (UniqueName: \"kubernetes.io/projected/90c6474d-44a1-4164-a85b-6de0525dc656-kube-api-access-wwjh6\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.814466 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-apiservice-cert\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.814775 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3eb38e0-d8b5-46fc-809d-73791d569816-service-ca" (OuterVolumeSpecName: "service-ca") pod "e3eb38e0-d8b5-46fc-809d-73791d569816" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.815819 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "e3eb38e0-d8b5-46fc-809d-73791d569816" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.815896 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "e3eb38e0-d8b5-46fc-809d-73791d569816" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816"). InnerVolumeSpecName "etc-ssl-certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.818131 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90c6474d-44a1-4164-a85b-6de0525dc656-tmpfs\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.822198 master-0 kubenswrapper[6980]: I0313 12:43:50.820331 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-apiservice-cert\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.865521 master-0 kubenswrapper[6980]: I0313 12:43:50.829674 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-webhook-cert\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.865521 master-0 kubenswrapper[6980]: I0313 12:43:50.832450 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e3eb38e0-d8b5-46fc-809d-73791d569816" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:43:50.865521 master-0 kubenswrapper[6980]: I0313 12:43:50.836099 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3eb38e0-d8b5-46fc-809d-73791d569816-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e3eb38e0-d8b5-46fc-809d-73791d569816" (UID: "e3eb38e0-d8b5-46fc-809d-73791d569816"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:43:50.865521 master-0 kubenswrapper[6980]: I0313 12:43:50.842053 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwjh6\" (UniqueName: \"kubernetes.io/projected/90c6474d-44a1-4164-a85b-6de0525dc656-kube-api-access-wwjh6\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:43:50.865521 master-0 kubenswrapper[6980]: I0313 12:43:50.843340 6980 generic.go:334] "Generic (PLEG): container finished" podID="e3eb38e0-d8b5-46fc-809d-73791d569816" containerID="4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58" exitCode=0 Mar 13 12:43:50.865521 master-0 kubenswrapper[6980]: I0313 12:43:50.843405 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" event={"ID":"e3eb38e0-d8b5-46fc-809d-73791d569816","Type":"ContainerDied","Data":"4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58"} Mar 13 12:43:50.865521 master-0 kubenswrapper[6980]: I0313 12:43:50.843433 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" event={"ID":"e3eb38e0-d8b5-46fc-809d-73791d569816","Type":"ContainerDied","Data":"725739dce256aec84d1d35f08a2c0ef0a4d6fb2169686aeff14675d6012d989b"} Mar 13 12:43:50.865521 master-0 kubenswrapper[6980]: I0313 12:43:50.843452 
6980 scope.go:117] "RemoveContainer" containerID="4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58" Mar 13 12:43:50.865521 master-0 kubenswrapper[6980]: I0313 12:43:50.843587 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg" Mar 13 12:43:50.938745 master-0 kubenswrapper[6980]: I0313 12:43:50.909193 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" Mar 13 12:43:50.938745 master-0 kubenswrapper[6980]: I0313 12:43:50.918209 6980 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:43:50.938745 master-0 kubenswrapper[6980]: I0313 12:43:50.918241 6980 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e3eb38e0-d8b5-46fc-809d-73791d569816-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:43:50.938745 master-0 kubenswrapper[6980]: I0313 12:43:50.918251 6980 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3eb38e0-d8b5-46fc-809d-73791d569816-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:43:50.938745 master-0 kubenswrapper[6980]: I0313 12:43:50.918259 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e3eb38e0-d8b5-46fc-809d-73791d569816-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:43:50.938745 master-0 kubenswrapper[6980]: I0313 12:43:50.918270 6980 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/e3eb38e0-d8b5-46fc-809d-73791d569816-etc-cvo-updatepayloads\") on node \"master-0\" 
DevicePath \"\"" Mar 13 12:43:50.948067 master-0 kubenswrapper[6980]: I0313 12:43:50.947726 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:43:50.968909 master-0 kubenswrapper[6980]: I0313 12:43:50.962951 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"] Mar 13 12:43:50.968909 master-0 kubenswrapper[6980]: I0313 12:43:50.962983 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" event={"ID":"5a3a7953-ad67-432a-a546-71a5d4450ddd","Type":"ContainerStarted","Data":"6486369e198d9165d92d001391330ff9555e2b759135e809935c5fac0c0b0171"} Mar 13 12:43:50.968909 master-0 kubenswrapper[6980]: I0313 12:43:50.963006 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-7rfrg"] Mar 13 12:43:50.968909 master-0 kubenswrapper[6980]: I0313 12:43:50.963023 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" event={"ID":"b515c4c5-cec7-46d2-a435-1d46e26c30b8","Type":"ContainerStarted","Data":"52951c4c44dc81befb3c2b4f3b24d955875d716dca38e58955a268051a926b8b"} Mar 13 12:43:50.983820 master-0 kubenswrapper[6980]: I0313 12:43:50.978564 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:43:51.010875 master-0 kubenswrapper[6980]: I0313 12:43:51.004099 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"]
Mar 13 12:43:51.010875 master-0 kubenswrapper[6980]: E0313 12:43:51.004547 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e3eb38e0-d8b5-46fc-809d-73791d569816" containerName="cluster-version-operator"
Mar 13 12:43:51.010875 master-0 kubenswrapper[6980]: I0313 12:43:51.004563 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="e3eb38e0-d8b5-46fc-809d-73791d569816" containerName="cluster-version-operator"
Mar 13 12:43:51.010875 master-0 kubenswrapper[6980]: I0313 12:43:51.005010 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3eb38e0-d8b5-46fc-809d-73791d569816" containerName="cluster-version-operator"
Mar 13 12:43:51.013927 master-0 kubenswrapper[6980]: I0313 12:43:51.011694 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.017012 master-0 kubenswrapper[6980]: I0313 12:43:51.016915 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz"
Mar 13 12:43:51.096709 master-0 kubenswrapper[6980]: I0313 12:43:51.093757 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"
Mar 13 12:43:51.096709 master-0 kubenswrapper[6980]: I0313 12:43:51.093924 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:43:51.096709 master-0 kubenswrapper[6980]: I0313 12:43:51.068865 6980 scope.go:117] "RemoveContainer" containerID="4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58"
Mar 13 12:43:51.125337 master-0 kubenswrapper[6980]: I0313 12:43:51.098308 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 12:43:51.125337 master-0 kubenswrapper[6980]: I0313 12:43:51.098619 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 12:43:51.125337 master-0 kubenswrapper[6980]: I0313 12:43:51.104119 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 12:43:51.125337 master-0 kubenswrapper[6980]: E0313 12:43:51.110900 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58\": container with ID starting with 4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58 not found: ID does not exist" containerID="4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58"
Mar 13 12:43:51.125337 master-0 kubenswrapper[6980]: I0313 12:43:51.110979 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58"} err="failed to get container status \"4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58\": rpc error: code = NotFound desc = could not find container \"4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58\": container with ID starting with 4f9c8587ba5a76cd0ca773bf81451daadd1211aee549256bd8398f457064ac58 not found: ID does not exist"
Mar 13 12:43:51.142471 master-0 kubenswrapper[6980]: I0313 12:43:51.139476 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-2tphk"
Mar 13 12:43:51.195813 master-0 kubenswrapper[6980]: I0313 12:43:51.195538 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.195813 master-0 kubenswrapper[6980]: I0313 12:43:51.195673 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.195813 master-0 kubenswrapper[6980]: I0313 12:43:51.195710 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc1c9136-80e1-4736-8959-cc1e58aee26e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.195813 master-0 kubenswrapper[6980]: I0313 12:43:51.195780 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc1c9136-80e1-4736-8959-cc1e58aee26e-service-ca\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.195813 master-0 kubenswrapper[6980]: I0313 12:43:51.195818 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc1c9136-80e1-4736-8959-cc1e58aee26e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.260520 master-0 kubenswrapper[6980]: I0313 12:43:51.259173 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"]
Mar 13 12:43:51.301165 master-0 kubenswrapper[6980]: I0313 12:43:51.297175 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.301165 master-0 kubenswrapper[6980]: I0313 12:43:51.297270 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.301165 master-0 kubenswrapper[6980]: I0313 12:43:51.297297 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc1c9136-80e1-4736-8959-cc1e58aee26e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.301165 master-0 kubenswrapper[6980]: I0313 12:43:51.297329 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc1c9136-80e1-4736-8959-cc1e58aee26e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.301165 master-0 kubenswrapper[6980]: I0313 12:43:51.297347 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc1c9136-80e1-4736-8959-cc1e58aee26e-service-ca\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.301165 master-0 kubenswrapper[6980]: I0313 12:43:51.298257 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc1c9136-80e1-4736-8959-cc1e58aee26e-service-ca\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.301165 master-0 kubenswrapper[6980]: I0313 12:43:51.298301 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.301165 master-0 kubenswrapper[6980]: I0313 12:43:51.298329 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.315742 master-0 kubenswrapper[6980]: I0313 12:43:51.311292 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc1c9136-80e1-4736-8959-cc1e58aee26e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.332513 master-0 kubenswrapper[6980]: I0313 12:43:51.332471 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc1c9136-80e1-4736-8959-cc1e58aee26e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.374150 master-0 kubenswrapper[6980]: I0313 12:43:51.373206 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:43:51.412392 master-0 kubenswrapper[6980]: I0313 12:43:51.411010 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"]
Mar 13 12:43:51.456094 master-0 kubenswrapper[6980]: W0313 12:43:51.454688 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31442e1e_3f42_4dba_82d5_08e5f8d29a58.slice/crio-923bfd475b1f36c0aed9c9baa6b1e8120764cc5989d69bd8394f8af7e46356e0 WatchSource:0}: Error finding container 923bfd475b1f36c0aed9c9baa6b1e8120764cc5989d69bd8394f8af7e46356e0: Status 404 returned error can't find the container with id 923bfd475b1f36c0aed9c9baa6b1e8120764cc5989d69bd8394f8af7e46356e0
Mar 13 12:43:51.722792 master-0 kubenswrapper[6980]: I0313 12:43:51.722388 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"]
Mar 13 12:43:51.723900 master-0 kubenswrapper[6980]: W0313 12:43:51.723855 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode0763043_3813_43b6_9618_b2d15c942edb.slice/crio-cf67d16ae41f2d06685c25d23bb40014bd3ceb93a00f8755a0e1d4d5c6c424a3 WatchSource:0}: Error finding container cf67d16ae41f2d06685c25d23bb40014bd3ceb93a00f8755a0e1d4d5c6c424a3: Status 404 returned error can't find the container with id cf67d16ae41f2d06685c25d23bb40014bd3ceb93a00f8755a0e1d4d5c6c424a3
Mar 13 12:43:51.729749 master-0 kubenswrapper[6980]: I0313 12:43:51.727449 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-s4gd8"]
Mar 13 12:43:51.749880 master-0 kubenswrapper[6980]: I0313 12:43:51.749838 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz"]
Mar 13 12:43:51.871441 master-0 kubenswrapper[6980]: I0313 12:43:51.871397 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz"]
Mar 13 12:43:51.949795 master-0 kubenswrapper[6980]: I0313 12:43:51.949747 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"]
Mar 13 12:43:51.950787 master-0 kubenswrapper[6980]: W0313 12:43:51.950740 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6cf4e65_37ac_4c8c_98dd_1c86ca7997f2.slice/crio-01bcdd1bedab010174152427c2fc9fc5240d2b52c3bee410c42e480d89d6c0f8 WatchSource:0}: Error finding container 01bcdd1bedab010174152427c2fc9fc5240d2b52c3bee410c42e480d89d6c0f8: Status 404 returned error can't find the container with id 01bcdd1bedab010174152427c2fc9fc5240d2b52c3bee410c42e480d89d6c0f8
Mar 13 12:43:51.953692 master-0 kubenswrapper[6980]: I0313 12:43:51.953478 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" event={"ID":"6592aa5b-4a50-40f6-80a5-87e3c547018d","Type":"ContainerStarted","Data":"5a0142e38f1fead1e1ace0c76fc676540eac64737a730f1859f051c85babda93"}
Mar 13 12:43:51.953692 master-0 kubenswrapper[6980]: I0313 12:43:51.953523 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" event={"ID":"6592aa5b-4a50-40f6-80a5-87e3c547018d","Type":"ContainerStarted","Data":"d30550f78f634355f75a46b81834746cb5b11fa2ba553146cdee3bed2ae12ebf"}
Mar 13 12:43:51.958196 master-0 kubenswrapper[6980]: W0313 12:43:51.957310 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03758d96_5a20_4cba_92e0_47f5b1a3e558.slice/crio-b99b6a6f8847624f7d1b248d004e4f915acf70fd8eb923011f7483aa95bb9e70 WatchSource:0}: Error finding container b99b6a6f8847624f7d1b248d004e4f915acf70fd8eb923011f7483aa95bb9e70: Status 404 returned error can't find the container with id b99b6a6f8847624f7d1b248d004e4f915acf70fd8eb923011f7483aa95bb9e70
Mar 13 12:43:51.961242 master-0 kubenswrapper[6980]: I0313 12:43:51.960976 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"]
Mar 13 12:43:51.961726 master-0 kubenswrapper[6980]: I0313 12:43:51.961679 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerStarted","Data":"8c72f4222c0466238ecef6497355ca369f8bfcd600621df230959caf510fb4c4"}
Mar 13 12:43:51.964005 master-0 kubenswrapper[6980]: I0313 12:43:51.963974 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56" event={"ID":"dc1c9136-80e1-4736-8959-cc1e58aee26e","Type":"ContainerStarted","Data":"3d19c9aac2a8b462f3074f3033f67c6974a524b9c316a3fe3a6786af1aa0eae0"}
Mar 13 12:43:51.967304 master-0 kubenswrapper[6980]: I0313 12:43:51.964150 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56" event={"ID":"dc1c9136-80e1-4736-8959-cc1e58aee26e","Type":"ContainerStarted","Data":"824d5e18774211ffd65269e6c76a79cffc7294bc9b558c91abfddb9b02e76444"}
Mar 13 12:43:51.967304 master-0 kubenswrapper[6980]: I0313 12:43:51.966055 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"]
Mar 13 12:43:51.967304 master-0 kubenswrapper[6980]: I0313 12:43:51.966561 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" event={"ID":"e0763043-3813-43b6-9618-b2d15c942edb","Type":"ContainerStarted","Data":"cf67d16ae41f2d06685c25d23bb40014bd3ceb93a00f8755a0e1d4d5c6c424a3"}
Mar 13 12:43:51.967304 master-0 kubenswrapper[6980]: I0313 12:43:51.967143 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"]
Mar 13 12:43:51.970272 master-0 kubenswrapper[6980]: I0313 12:43:51.970054 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577" event={"ID":"31442e1e-3f42-4dba-82d5-08e5f8d29a58","Type":"ContainerStarted","Data":"7045a9d0952a350541bb989b261cf903d093bd57eb604a89714df96299e21f3d"}
Mar 13 12:43:51.970272 master-0 kubenswrapper[6980]: I0313 12:43:51.970103 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577" event={"ID":"31442e1e-3f42-4dba-82d5-08e5f8d29a58","Type":"ContainerStarted","Data":"923bfd475b1f36c0aed9c9baa6b1e8120764cc5989d69bd8394f8af7e46356e0"}
Mar 13 12:43:51.980129 master-0 kubenswrapper[6980]: I0313 12:43:51.977933 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" event={"ID":"5a3a7953-ad67-432a-a546-71a5d4450ddd","Type":"ContainerStarted","Data":"3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6"}
Mar 13 12:43:51.980788 master-0 kubenswrapper[6980]: I0313 12:43:51.980745 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz" event={"ID":"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75","Type":"ContainerStarted","Data":"cce07379c81de2caa56b921b64dd3ee63be30f56bcec066d326de0a8f136d5b8"}
Mar 13 12:43:52.020079 master-0 kubenswrapper[6980]: I0313 12:43:51.989373 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56" podStartSLOduration=1.989348355 podStartE2EDuration="1.989348355s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:43:51.984330711 +0000 UTC m=+299.318325337" watchObservedRunningTime="2026-03-13 12:43:51.989348355 +0000 UTC m=+299.323342981"
Mar 13 12:43:52.020079 master-0 kubenswrapper[6980]: I0313 12:43:52.001740 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz" event={"ID":"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c","Type":"ContainerStarted","Data":"72c8417a873fd1b85ceced7f871125b403b5b588edc21a1d386d6970721625a8"}
Mar 13 12:43:52.866397 master-0 kubenswrapper[6980]: I0313 12:43:52.864867 6980 scope.go:117] "RemoveContainer" containerID="35443773bcdd37ca280fdba5333615f02daa51365a0b805a941d21a3cf11ec6c"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.018100 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3eb38e0-d8b5-46fc-809d-73791d569816" path="/var/lib/kubelet/pods/e3eb38e0-d8b5-46fc-809d-73791d569816/volumes"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.030014 6980 scope.go:117] "RemoveContainer" containerID="9f4ddd8b81aa8e6f6453e9d79c9c9826152b36b58f325733cabc91a77b93f83c"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.042038 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.043255 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.049756 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-phtzh"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.051802 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"]
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.057128 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" event={"ID":"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2","Type":"ContainerStarted","Data":"01bcdd1bedab010174152427c2fc9fc5240d2b52c3bee410c42e480d89d6c0f8"}
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.063638 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.122374 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.122431 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.122571 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.223972 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.224064 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.224098 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.224171 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.224486 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.339021 master-0 kubenswrapper[6980]: I0313 12:43:53.286602 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kube-api-access\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.358244 master-0 kubenswrapper[6980]: I0313 12:43:53.353108 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" event={"ID":"03758d96-5a20-4cba-92e0-47f5b1a3e558","Type":"ContainerStarted","Data":"a5f0a5976e4b4a6d2fb7f24d7ff7611e986318248ed1e0f92cd35cd57ee4d039"}
Mar 13 12:43:53.358244 master-0 kubenswrapper[6980]: I0313 12:43:53.353192 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" event={"ID":"03758d96-5a20-4cba-92e0-47f5b1a3e558","Type":"ContainerStarted","Data":"b99b6a6f8847624f7d1b248d004e4f915acf70fd8eb923011f7483aa95bb9e70"}
Mar 13 12:43:53.363740 master-0 kubenswrapper[6980]: I0313 12:43:53.362569 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" event={"ID":"14eb83e7-c436-4f10-8cba-29e09a7036a8","Type":"ContainerStarted","Data":"b460a513aa98981ff4ecc3db99816d3699c4d9e87f4c27cd49ee7b8d607d181a"}
Mar 13 12:43:53.363740 master-0 kubenswrapper[6980]: I0313 12:43:53.362628 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" event={"ID":"14eb83e7-c436-4f10-8cba-29e09a7036a8","Type":"ContainerStarted","Data":"bbeecd971cdd646524d03ce263cd9cc7322a03fe74a470dadcf157b0f31ed1cb"}
Mar 13 12:43:53.363740 master-0 kubenswrapper[6980]: I0313 12:43:53.362639 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" event={"ID":"14eb83e7-c436-4f10-8cba-29e09a7036a8","Type":"ContainerStarted","Data":"adad8c1ef5c4b589ed8b1cb34f6484ca79dbaffdd4f786714ba25a8f28ac7eaf"}
Mar 13 12:43:53.367715 master-0 kubenswrapper[6980]: I0313 12:43:53.367654 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" event={"ID":"90c6474d-44a1-4164-a85b-6de0525dc656","Type":"ContainerStarted","Data":"bfbb802e1cd22717431053508e7a9ab47190f18b34a81d8d738c68d99cbbc52f"}
Mar 13 12:43:53.367715 master-0 kubenswrapper[6980]: I0313 12:43:53.367706 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" event={"ID":"90c6474d-44a1-4164-a85b-6de0525dc656","Type":"ContainerStarted","Data":"99e9d3fc7152ff7bfdbd97007d95913bd72cfac57cdb379fde935a1b0b89854a"}
Mar 13 12:43:53.367940 master-0 kubenswrapper[6980]: I0313 12:43:53.367840 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"
Mar 13 12:43:53.386438 master-0 kubenswrapper[6980]: I0313 12:43:53.386376 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:43:53.934725 master-0 kubenswrapper[6980]: I0313 12:43:53.934517 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" podStartSLOduration=3.9344847080000003 podStartE2EDuration="3.934484708s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:43:53.933093916 +0000 UTC m=+301.267088562" watchObservedRunningTime="2026-03-13 12:43:53.934484708 +0000 UTC m=+301.268479334"
Mar 13 12:43:53.972952 master-0 kubenswrapper[6980]: I0313 12:43:53.972189 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" podStartSLOduration=3.972154772 podStartE2EDuration="3.972154772s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:43:53.968678395 +0000 UTC m=+301.302673021" watchObservedRunningTime="2026-03-13 12:43:53.972154772 +0000 UTC m=+301.306149418"
Mar 13 12:43:54.077734 master-0 kubenswrapper[6980]: I0313 12:43:54.077475 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"
Mar 13 12:43:55.539887 master-0 kubenswrapper[6980]: I0313 12:43:55.539247 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-mlgxw"]
Mar 13 12:43:55.543121 master-0 kubenswrapper[6980]: I0313 12:43:55.543078 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:55.548775 master-0 kubenswrapper[6980]: I0313 12:43:55.548025 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-fbzjs"
Mar 13 12:43:55.548775 master-0 kubenswrapper[6980]: I0313 12:43:55.548561 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 13 12:43:56.163160 master-0 kubenswrapper[6980]: I0313 12:43:56.160500 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4m68d\" (UniqueName: \"kubernetes.io/projected/e8d83309-58b2-40af-ab48-1f8b9aeffefb-kube-api-access-4m68d\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.163160 master-0 kubenswrapper[6980]: I0313 12:43:56.160628 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8d83309-58b2-40af-ab48-1f8b9aeffefb-mcd-auth-proxy-config\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.163160 master-0 kubenswrapper[6980]: I0313 12:43:56.160691 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8d83309-58b2-40af-ab48-1f8b9aeffefb-rootfs\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.163160 master-0 kubenswrapper[6980]: I0313 12:43:56.160734 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8d83309-58b2-40af-ab48-1f8b9aeffefb-proxy-tls\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.387016 master-0 kubenswrapper[6980]: I0313 12:43:56.386771 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m68d\" (UniqueName: \"kubernetes.io/projected/e8d83309-58b2-40af-ab48-1f8b9aeffefb-kube-api-access-4m68d\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.387016 master-0 kubenswrapper[6980]: I0313 12:43:56.386875 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8d83309-58b2-40af-ab48-1f8b9aeffefb-mcd-auth-proxy-config\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.387016 master-0 kubenswrapper[6980]: I0313 12:43:56.386921 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8d83309-58b2-40af-ab48-1f8b9aeffefb-rootfs\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.387016 master-0 kubenswrapper[6980]: I0313 12:43:56.386961 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8d83309-58b2-40af-ab48-1f8b9aeffefb-proxy-tls\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.388004 master-0 kubenswrapper[6980]: I0313 12:43:56.387902 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8d83309-58b2-40af-ab48-1f8b9aeffefb-rootfs\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.390602 master-0 kubenswrapper[6980]: I0313 12:43:56.388998 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8d83309-58b2-40af-ab48-1f8b9aeffefb-mcd-auth-proxy-config\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.397497 master-0 kubenswrapper[6980]: I0313 12:43:56.397408 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8d83309-58b2-40af-ab48-1f8b9aeffefb-proxy-tls\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.441566 master-0 kubenswrapper[6980]: I0313 12:43:56.441279 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m68d\" (UniqueName: \"kubernetes.io/projected/e8d83309-58b2-40af-ab48-1f8b9aeffefb-kube-api-access-4m68d\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:43:56.478112 master-0 kubenswrapper[6980]: I0313 12:43:56.478055 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:44:00.899431 master-0 kubenswrapper[6980]: I0313 12:44:00.899377 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:44:09.306244 master-0 kubenswrapper[6980]: I0313 12:44:09.306162 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7"]
Mar 13 12:44:15.364524 master-0 kubenswrapper[6980]: I0313 12:44:15.364423 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7"]
Mar 13 12:44:16.703068 master-0 kubenswrapper[6980]: I0313 12:44:16.702815 6980 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 13 12:44:16.704004 master-0 kubenswrapper[6980]: I0313 12:44:16.703506 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" containerID="cri-o://1deefa2eed04097ebe852cdcfbe526eeadec29031bfced962671dccee87c51d9" gracePeriod=30
Mar 13 12:44:16.704004 master-0 kubenswrapper[6980]: I0313 12:44:16.703557 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" containerID="cri-o://6b009be90010b458906ee5384812043c64b344c57f3d33c0327bca957e554f6b" gracePeriod=30
Mar 13 12:44:16.704004 master-0 kubenswrapper[6980]: I0313 12:44:16.703477 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" containerID="cri-o://aa937531213df9edca1f974017f8219d25e8981234f54f6bab6be21f0713fc0c" gracePeriod=30
Mar 13 12:44:16.704004 master-0 kubenswrapper[6980]: I0313 12:44:16.703488 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" containerID="cri-o://5174065d158bac4c4f8df59a6fd09da4b437cfcdb6c1e02c2fa3d32ae43403ab" gracePeriod=30 Mar 13 12:44:16.704004 master-0 kubenswrapper[6980]: I0313 12:44:16.703829 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" containerID="cri-o://fb14b7f25225651cce5060024dd96fe2745167fe14059c382213bb9bcb069656" gracePeriod=30 Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.713605 6980 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: E0313 12:44:16.714648 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.714675 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: E0313 12:44:16.714689 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.714698 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: E0313 12:44:16.714709 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.714724 6980 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: E0313 12:44:16.714753 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.714764 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: E0313 12:44:16.714787 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.714795 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: E0313 12:44:16.714807 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.714815 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: E0313 12:44:16.714841 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.714851 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: E0313 12:44:16.714869 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.714915 6980 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.715739 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.715765 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.715781 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.715799 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 12:44:16.717716 master-0 kubenswrapper[6980]: I0313 12:44:16.715818 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 12:44:16.773418 master-0 kubenswrapper[6980]: I0313 12:44:16.773364 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.773418 master-0 kubenswrapper[6980]: I0313 12:44:16.773427 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.773745 master-0 kubenswrapper[6980]: I0313 12:44:16.773458 6980 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.773745 master-0 kubenswrapper[6980]: I0313 12:44:16.773505 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.773745 master-0 kubenswrapper[6980]: I0313 12:44:16.773539 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.773745 master-0 kubenswrapper[6980]: I0313 12:44:16.773563 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875211 master-0 kubenswrapper[6980]: I0313 12:44:16.875152 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875211 master-0 kubenswrapper[6980]: I0313 12:44:16.875207 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875494 master-0 kubenswrapper[6980]: I0313 12:44:16.875230 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875494 master-0 kubenswrapper[6980]: I0313 12:44:16.875265 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875494 master-0 kubenswrapper[6980]: I0313 12:44:16.875289 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875494 master-0 kubenswrapper[6980]: I0313 12:44:16.875320 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875494 master-0 kubenswrapper[6980]: I0313 12:44:16.875435 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875494 
master-0 kubenswrapper[6980]: I0313 12:44:16.875470 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875494 master-0 kubenswrapper[6980]: I0313 12:44:16.875488 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875777 master-0 kubenswrapper[6980]: I0313 12:44:16.875510 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875777 master-0 kubenswrapper[6980]: I0313 12:44:16.875618 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:16.875777 master-0 kubenswrapper[6980]: I0313 12:44:16.875663 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:44:25.911632 master-0 kubenswrapper[6980]: I0313 12:44:25.911553 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 12:44:25.912639 master-0 
kubenswrapper[6980]: I0313 12:44:25.912501 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 12:44:25.914069 master-0 kubenswrapper[6980]: I0313 12:44:25.914019 6980 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="aa937531213df9edca1f974017f8219d25e8981234f54f6bab6be21f0713fc0c" exitCode=2 Mar 13 12:44:25.914069 master-0 kubenswrapper[6980]: I0313 12:44:25.914049 6980 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="fb14b7f25225651cce5060024dd96fe2745167fe14059c382213bb9bcb069656" exitCode=0 Mar 13 12:44:25.914069 master-0 kubenswrapper[6980]: I0313 12:44:25.914057 6980 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="1deefa2eed04097ebe852cdcfbe526eeadec29031bfced962671dccee87c51d9" exitCode=2 Mar 13 12:44:25.915608 master-0 kubenswrapper[6980]: I0313 12:44:25.915550 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/1.log" Mar 13 12:44:25.915726 master-0 kubenswrapper[6980]: I0313 12:44:25.915610 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerStarted","Data":"28de3c0d71e1d169ce2f9898912a1f5317bd1fcad7bcb8ebdacbc8bc917680f8"} Mar 13 12:44:26.262196 master-0 kubenswrapper[6980]: W0313 12:44:26.262134 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8d83309_58b2_40af_ab48_1f8b9aeffefb.slice/crio-ccdf24fb12f7d902aeac298cfdb10afdab60e06015a73c1ef84d90c38418232b WatchSource:0}: Error finding container 
ccdf24fb12f7d902aeac298cfdb10afdab60e06015a73c1ef84d90c38418232b: Status 404 returned error can't find the container with id ccdf24fb12f7d902aeac298cfdb10afdab60e06015a73c1ef84d90c38418232b Mar 13 12:44:26.924701 master-0 kubenswrapper[6980]: I0313 12:44:26.924593 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz" event={"ID":"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75","Type":"ContainerStarted","Data":"ae10169a9e119350b2a96f5309d12915f51070e5dccf4f0f5176eb1b5ea4a702"} Mar 13 12:44:26.926066 master-0 kubenswrapper[6980]: I0313 12:44:26.926023 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz" event={"ID":"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c","Type":"ContainerStarted","Data":"4b979fe9ad4ecdc2d069b6ce73e8e3c7409e5037596b738f1d3183fda733f95f"} Mar 13 12:44:26.928163 master-0 kubenswrapper[6980]: I0313 12:44:26.928122 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" event={"ID":"6592aa5b-4a50-40f6-80a5-87e3c547018d","Type":"ContainerStarted","Data":"9479622bd85ef4f03c20b6f431211da5638ac13b80fa002a4924e9e47ebdf224"} Mar 13 12:44:26.931633 master-0 kubenswrapper[6980]: I0313 12:44:26.931597 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" event={"ID":"e0763043-3813-43b6-9618-b2d15c942edb","Type":"ContainerStarted","Data":"e43d4ab25b60b182304f9a1ec5ee5cd5ceec1bb5ed341ff305cc05db8e8062fd"} Mar 13 12:44:26.933881 master-0 kubenswrapper[6980]: I0313 12:44:26.933813 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vng8" event={"ID":"cf9f90f5-643f-41e8-a886-7d19fb064afc","Type":"ContainerStarted","Data":"0d07ff19cee22aeb65be4aac439ca164cb1cde9e958fa7bfb90a8bc5b4af437e"} Mar 13 12:44:26.935673 master-0 
kubenswrapper[6980]: I0313 12:44:26.935613 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerStarted","Data":"eab5e29eedcb24ff8a4205f7bf62bee3cde077c035b42cc119aefb133323f99c"} Mar 13 12:44:26.935673 master-0 kubenswrapper[6980]: I0313 12:44:26.935665 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerStarted","Data":"ccdf24fb12f7d902aeac298cfdb10afdab60e06015a73c1ef84d90c38418232b"} Mar 13 12:44:27.986895 master-0 kubenswrapper[6980]: I0313 12:44:27.986790 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerStarted","Data":"ac0c6b91c4cfd8dd096e8fcdb75e83d044846df3f66334b7a9bd9b8b1443715b"} Mar 13 12:44:27.990771 master-0 kubenswrapper[6980]: I0313 12:44:27.990722 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz" event={"ID":"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c","Type":"ContainerStarted","Data":"27b5a4a1b0a0b68ecb573ed790ac529aef989d332f7523b3216a9f838f9f59bc"} Mar 13 12:44:27.993137 master-0 kubenswrapper[6980]: I0313 12:44:27.993061 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerStarted","Data":"325a312bdd5655125848695bf9ff7bb2b0934ae3b7bbc8f5febd7f2f02b8ee68"} Mar 13 12:44:27.996440 master-0 kubenswrapper[6980]: I0313 12:44:27.996394 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577" 
event={"ID":"31442e1e-3f42-4dba-82d5-08e5f8d29a58","Type":"ContainerStarted","Data":"7df36c6c6f752b8666aa44f2e2421974baf69e74c94497d098c2657849fa5cef"} Mar 13 12:44:27.999269 master-0 kubenswrapper[6980]: I0313 12:44:27.999219 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" event={"ID":"5a3a7953-ad67-432a-a546-71a5d4450ddd","Type":"ContainerStarted","Data":"8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868"} Mar 13 12:44:27.999386 master-0 kubenswrapper[6980]: I0313 12:44:27.999351 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" podUID="5a3a7953-ad67-432a-a546-71a5d4450ddd" containerName="kube-rbac-proxy" containerID="cri-o://3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6" gracePeriod=30 Mar 13 12:44:27.999504 master-0 kubenswrapper[6980]: I0313 12:44:27.999442 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" podUID="5a3a7953-ad67-432a-a546-71a5d4450ddd" containerName="machine-approver-controller" containerID="cri-o://8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868" gracePeriod=30 Mar 13 12:44:28.001470 master-0 kubenswrapper[6980]: I0313 12:44:28.001410 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" event={"ID":"b515c4c5-cec7-46d2-a435-1d46e26c30b8","Type":"ContainerStarted","Data":"d01169f88cb74b74b05f9bbdd9537a1f86586c2ee82594e44b6b203eec9dc752"} Mar 13 12:44:28.001470 master-0 kubenswrapper[6980]: I0313 12:44:28.001458 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" 
event={"ID":"b515c4c5-cec7-46d2-a435-1d46e26c30b8","Type":"ContainerStarted","Data":"1420df773cc584df67e7d58ebf0d1458c5a4185ad5125dd7d19578d21064ab48"} Mar 13 12:44:28.007088 master-0 kubenswrapper[6980]: I0313 12:44:28.006085 6980 generic.go:334] "Generic (PLEG): container finished" podID="b6a9184d-0557-4e61-bf31-6dd69c0dfb15" containerID="4f24aa6f7ba7f467cc1097431b5fb274879298d3aa1e012c074408a731f35aa0" exitCode=0 Mar 13 12:44:28.007088 master-0 kubenswrapper[6980]: I0313 12:44:28.006176 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w8hd" event={"ID":"b6a9184d-0557-4e61-bf31-6dd69c0dfb15","Type":"ContainerDied","Data":"4f24aa6f7ba7f467cc1097431b5fb274879298d3aa1e012c074408a731f35aa0"} Mar 13 12:44:28.009739 master-0 kubenswrapper[6980]: I0313 12:44:28.009563 6980 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:44:28.014753 master-0 kubenswrapper[6980]: I0313 12:44:28.014662 6980 generic.go:334] "Generic (PLEG): container finished" podID="cf9f90f5-643f-41e8-a886-7d19fb064afc" containerID="0d07ff19cee22aeb65be4aac439ca164cb1cde9e958fa7bfb90a8bc5b4af437e" exitCode=0 Mar 13 12:44:28.014854 master-0 kubenswrapper[6980]: I0313 12:44:28.014776 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vng8" event={"ID":"cf9f90f5-643f-41e8-a886-7d19fb064afc","Type":"ContainerDied","Data":"0d07ff19cee22aeb65be4aac439ca164cb1cde9e958fa7bfb90a8bc5b4af437e"} Mar 13 12:44:28.018553 master-0 kubenswrapper[6980]: I0313 12:44:28.018447 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" event={"ID":"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2","Type":"ContainerStarted","Data":"9d44e8bb2becf09a5e1714163e260e9eea95f1c259f54fde8d48b3b6f2a4d308"} Mar 13 12:44:28.027045 master-0 kubenswrapper[6980]: I0313 12:44:28.026996 6980 generic.go:334] "Generic 
(PLEG): container finished" podID="5623ea13-a34b-4510-8902-341912d115df" containerID="82a12e1f6ddb7f481e1349942a599a492f0112c52f7c9c85db4661268c70ed21" exitCode=0 Mar 13 12:44:28.027128 master-0 kubenswrapper[6980]: I0313 12:44:28.027073 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28fdg" event={"ID":"5623ea13-a34b-4510-8902-341912d115df","Type":"ContainerDied","Data":"82a12e1f6ddb7f481e1349942a599a492f0112c52f7c9c85db4661268c70ed21"} Mar 13 12:44:28.031279 master-0 kubenswrapper[6980]: I0313 12:44:28.031208 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" event={"ID":"03758d96-5a20-4cba-92e0-47f5b1a3e558","Type":"ContainerStarted","Data":"9066c501c8952713ec5ae736a9f2e65cb6532723cae2054e921f1b98a058dc41"} Mar 13 12:44:28.033825 master-0 kubenswrapper[6980]: I0313 12:44:28.033771 6980 generic.go:334] "Generic (PLEG): container finished" podID="730e1f43-39b7-41de-ac81-270966725477" containerID="f9c8b4d625f0aef8e218e4d96fd37c573a6bee5d3051b2b0c36d16b60cba363a" exitCode=0 Mar 13 12:44:28.033923 master-0 kubenswrapper[6980]: I0313 12:44:28.033864 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92rsn" event={"ID":"730e1f43-39b7-41de-ac81-270966725477","Type":"ContainerDied","Data":"f9c8b4d625f0aef8e218e4d96fd37c573a6bee5d3051b2b0c36d16b60cba363a"} Mar 13 12:44:28.037608 master-0 kubenswrapper[6980]: I0313 12:44:28.037526 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" event={"ID":"e0763043-3813-43b6-9618-b2d15c942edb","Type":"ContainerStarted","Data":"a751f96f273041ddc336266c026a8355945b8fdd7d227c4137fab982f5664bcb"} Mar 13 12:44:28.133713 master-0 kubenswrapper[6980]: I0313 12:44:28.133666 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" Mar 13 12:44:28.281523 master-0 kubenswrapper[6980]: I0313 12:44:28.281403 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5a3a7953-ad67-432a-a546-71a5d4450ddd-machine-approver-tls\") pod \"5a3a7953-ad67-432a-a546-71a5d4450ddd\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " Mar 13 12:44:28.281523 master-0 kubenswrapper[6980]: I0313 12:44:28.281471 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msvdh\" (UniqueName: \"kubernetes.io/projected/5a3a7953-ad67-432a-a546-71a5d4450ddd-kube-api-access-msvdh\") pod \"5a3a7953-ad67-432a-a546-71a5d4450ddd\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " Mar 13 12:44:28.281793 master-0 kubenswrapper[6980]: I0313 12:44:28.281537 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-auth-proxy-config\") pod \"5a3a7953-ad67-432a-a546-71a5d4450ddd\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " Mar 13 12:44:28.281793 master-0 kubenswrapper[6980]: I0313 12:44:28.281558 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-config\") pod \"5a3a7953-ad67-432a-a546-71a5d4450ddd\" (UID: \"5a3a7953-ad67-432a-a546-71a5d4450ddd\") " Mar 13 12:44:28.282112 master-0 kubenswrapper[6980]: I0313 12:44:28.282081 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-config" (OuterVolumeSpecName: "config") pod "5a3a7953-ad67-432a-a546-71a5d4450ddd" (UID: "5a3a7953-ad67-432a-a546-71a5d4450ddd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:44:28.282234 master-0 kubenswrapper[6980]: I0313 12:44:28.282193 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "5a3a7953-ad67-432a-a546-71a5d4450ddd" (UID: "5a3a7953-ad67-432a-a546-71a5d4450ddd"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:44:28.284562 master-0 kubenswrapper[6980]: I0313 12:44:28.284518 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a3a7953-ad67-432a-a546-71a5d4450ddd-kube-api-access-msvdh" (OuterVolumeSpecName: "kube-api-access-msvdh") pod "5a3a7953-ad67-432a-a546-71a5d4450ddd" (UID: "5a3a7953-ad67-432a-a546-71a5d4450ddd"). InnerVolumeSpecName "kube-api-access-msvdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:44:28.284562 master-0 kubenswrapper[6980]: I0313 12:44:28.284547 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5a3a7953-ad67-432a-a546-71a5d4450ddd-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "5a3a7953-ad67-432a-a546-71a5d4450ddd" (UID: "5a3a7953-ad67-432a-a546-71a5d4450ddd"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:44:28.382824 master-0 kubenswrapper[6980]: I0313 12:44:28.382738 6980 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/5a3a7953-ad67-432a-a546-71a5d4450ddd-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:28.382824 master-0 kubenswrapper[6980]: I0313 12:44:28.382779 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msvdh\" (UniqueName: \"kubernetes.io/projected/5a3a7953-ad67-432a-a546-71a5d4450ddd-kube-api-access-msvdh\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:28.383138 master-0 kubenswrapper[6980]: I0313 12:44:28.382791 6980 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:28.383138 master-0 kubenswrapper[6980]: I0313 12:44:28.382886 6980 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a3a7953-ad67-432a-a546-71a5d4450ddd-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:29.047672 master-0 kubenswrapper[6980]: I0313 12:44:29.047426 6980 generic.go:334] "Generic (PLEG): container finished" podID="5a3a7953-ad67-432a-a546-71a5d4450ddd" containerID="8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868" exitCode=0 Mar 13 12:44:29.047672 master-0 kubenswrapper[6980]: I0313 12:44:29.047457 6980 generic.go:334] "Generic (PLEG): container finished" podID="5a3a7953-ad67-432a-a546-71a5d4450ddd" containerID="3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6" exitCode=0 Mar 13 12:44:29.047672 master-0 kubenswrapper[6980]: I0313 12:44:29.047513 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" 
event={"ID":"5a3a7953-ad67-432a-a546-71a5d4450ddd","Type":"ContainerDied","Data":"8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868"} Mar 13 12:44:29.047672 master-0 kubenswrapper[6980]: I0313 12:44:29.047527 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" Mar 13 12:44:29.047672 master-0 kubenswrapper[6980]: I0313 12:44:29.047557 6980 scope.go:117] "RemoveContainer" containerID="8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868" Mar 13 12:44:29.047672 master-0 kubenswrapper[6980]: I0313 12:44:29.047542 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" event={"ID":"5a3a7953-ad67-432a-a546-71a5d4450ddd","Type":"ContainerDied","Data":"3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6"} Mar 13 12:44:29.047672 master-0 kubenswrapper[6980]: I0313 12:44:29.047615 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7" event={"ID":"5a3a7953-ad67-432a-a546-71a5d4450ddd","Type":"ContainerDied","Data":"6486369e198d9165d92d001391330ff9555e2b759135e809935c5fac0c0b0171"} Mar 13 12:44:29.052402 master-0 kubenswrapper[6980]: I0313 12:44:29.051696 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" event={"ID":"b515c4c5-cec7-46d2-a435-1d46e26c30b8","Type":"ContainerStarted","Data":"fdca8f603a7412529d48ec01b75993ba4d02eef34abe5398d9e166f2bc343f69"} Mar 13 12:44:29.053008 master-0 kubenswrapper[6980]: I0313 12:44:29.052923 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" 
containerName="cluster-cloud-controller-manager" containerID="cri-o://1420df773cc584df67e7d58ebf0d1458c5a4185ad5125dd7d19578d21064ab48" gracePeriod=30
Mar 13 12:44:29.053146 master-0 kubenswrapper[6980]: I0313 12:44:29.053037 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="kube-rbac-proxy" containerID="cri-o://fdca8f603a7412529d48ec01b75993ba4d02eef34abe5398d9e166f2bc343f69" gracePeriod=30
Mar 13 12:44:29.053146 master-0 kubenswrapper[6980]: I0313 12:44:29.053081 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="config-sync-controllers" containerID="cri-o://d01169f88cb74b74b05f9bbdd9537a1f86586c2ee82594e44b6b203eec9dc752" gracePeriod=30
Mar 13 12:44:29.068326 master-0 kubenswrapper[6980]: I0313 12:44:29.068133 6980 scope.go:117] "RemoveContainer" containerID="3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6"
Mar 13 12:44:29.121372 master-0 kubenswrapper[6980]: I0313 12:44:29.121231 6980 scope.go:117] "RemoveContainer" containerID="8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868"
Mar 13 12:44:29.121870 master-0 kubenswrapper[6980]: E0313 12:44:29.121837 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868\": container with ID starting with 8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868 not found: ID does not exist" containerID="8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868"
Mar 13 12:44:29.121968 master-0 kubenswrapper[6980]: I0313 12:44:29.121881 6980
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868"} err="failed to get container status \"8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868\": rpc error: code = NotFound desc = could not find container \"8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868\": container with ID starting with 8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868 not found: ID does not exist"
Mar 13 12:44:29.121968 master-0 kubenswrapper[6980]: I0313 12:44:29.121911 6980 scope.go:117] "RemoveContainer" containerID="3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6"
Mar 13 12:44:29.122557 master-0 kubenswrapper[6980]: E0313 12:44:29.122492 6980 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6\": container with ID starting with 3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6 not found: ID does not exist" containerID="3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6"
Mar 13 12:44:29.122658 master-0 kubenswrapper[6980]: I0313 12:44:29.122602 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6"} err="failed to get container status \"3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6\": rpc error: code = NotFound desc = could not find container \"3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6\": container with ID starting with 3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6 not found: ID does not exist"
Mar 13 12:44:29.122658 master-0 kubenswrapper[6980]: I0313 12:44:29.122646 6980 scope.go:117] "RemoveContainer" containerID="8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868"
Mar 13
12:44:29.123070 master-0 kubenswrapper[6980]: I0313 12:44:29.123037 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868"} err="failed to get container status \"8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868\": rpc error: code = NotFound desc = could not find container \"8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868\": container with ID starting with 8c2e527fd5d9e616e89c54e7e669abf16b408ca7a240da9111a188a93fd51868 not found: ID does not exist"
Mar 13 12:44:29.123143 master-0 kubenswrapper[6980]: I0313 12:44:29.123071 6980 scope.go:117] "RemoveContainer" containerID="3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6"
Mar 13 12:44:29.123452 master-0 kubenswrapper[6980]: I0313 12:44:29.123420 6980 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6"} err="failed to get container status \"3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6\": rpc error: code = NotFound desc = could not find container \"3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6\": container with ID starting with 3df315360fac64abb3482f40fb004db22f2c6d2257267f4b162990829f2e4ad6 not found: ID does not exist"
Mar 13 12:44:30.003869 master-0 kubenswrapper[6980]: E0313 12:44:30.003787 6980 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io master-0)"
Mar 13 12:44:30.061951 master-0 kubenswrapper[6980]: I0313 12:44:30.061883 6980 generic.go:334] "Generic (PLEG): container finished" podID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerID="fdca8f603a7412529d48ec01b75993ba4d02eef34abe5398d9e166f2bc343f69" exitCode=0
Mar 13 12:44:30.061951 master-0 kubenswrapper[6980]: I0313
12:44:30.061937 6980 generic.go:334] "Generic (PLEG): container finished" podID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerID="d01169f88cb74b74b05f9bbdd9537a1f86586c2ee82594e44b6b203eec9dc752" exitCode=0
Mar 13 12:44:30.198926 master-0 kubenswrapper[6980]: I0313 12:44:30.061949 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" event={"ID":"b515c4c5-cec7-46d2-a435-1d46e26c30b8","Type":"ContainerDied","Data":"fdca8f603a7412529d48ec01b75993ba4d02eef34abe5398d9e166f2bc343f69"}
Mar 13 12:44:30.198926 master-0 kubenswrapper[6980]: I0313 12:44:30.062040 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" event={"ID":"b515c4c5-cec7-46d2-a435-1d46e26c30b8","Type":"ContainerDied","Data":"d01169f88cb74b74b05f9bbdd9537a1f86586c2ee82594e44b6b203eec9dc752"}
Mar 13 12:44:31.075896 master-0 kubenswrapper[6980]: I0313 12:44:31.075803 6980 generic.go:334] "Generic (PLEG): container finished" podID="f2ae954b-a362-4cd1-8e54-c4aedcf30a00" containerID="0a5b3570a3db3335c8eec162d41987493203c31e437d042c22accb68c0ffa63a" exitCode=0
Mar 13 12:44:31.076708 master-0 kubenswrapper[6980]: I0313 12:44:31.075888 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"f2ae954b-a362-4cd1-8e54-c4aedcf30a00","Type":"ContainerDied","Data":"0a5b3570a3db3335c8eec162d41987493203c31e437d042c22accb68c0ffa63a"}
Mar 13 12:44:31.857675 master-0 kubenswrapper[6980]: I0313 12:44:31.857605 6980 prober.go:107] "Probe failed" probeType="Readiness" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused"
Mar 13
12:44:32.087377 master-0 kubenswrapper[6980]: I0313 12:44:32.087193 6980 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="0525d6d9761fef0346024ae4ee861ade4aa61a544af90b2159fea9caf5944f65" exitCode=1
Mar 13 12:44:32.088051 master-0 kubenswrapper[6980]: I0313 12:44:32.087734 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"0525d6d9761fef0346024ae4ee861ade4aa61a544af90b2159fea9caf5944f65"}
Mar 13 12:44:32.088051 master-0 kubenswrapper[6980]: I0313 12:44:32.087825 6980 scope.go:117] "RemoveContainer" containerID="63f3b75b31fa7fc52cd298f2c204c45e0576c862a52323ba1d17c643900efba4"
Mar 13 12:44:32.088704 master-0 kubenswrapper[6980]: I0313 12:44:32.088514 6980 scope.go:117] "RemoveContainer" containerID="0525d6d9761fef0346024ae4ee861ade4aa61a544af90b2159fea9caf5944f65"
Mar 13 12:44:32.088964 master-0 kubenswrapper[6980]: E0313 12:44:32.088935 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:44:32.397913 master-0 kubenswrapper[6980]: I0313 12:44:32.397867 6980 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 13 12:44:32.591872 master-0 kubenswrapper[6980]: I0313 12:44:32.591807 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kubelet-dir\") pod \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") "
Mar 13 12:44:32.592147 master-0 kubenswrapper[6980]: I0313 12:44:32.591909 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-var-lock\") pod \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") "
Mar 13 12:44:32.592147 master-0 kubenswrapper[6980]: I0313 12:44:32.591974 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kube-api-access\") pod \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\" (UID: \"f2ae954b-a362-4cd1-8e54-c4aedcf30a00\") "
Mar 13 12:44:32.592253 master-0 kubenswrapper[6980]: I0313 12:44:32.592137 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "f2ae954b-a362-4cd1-8e54-c4aedcf30a00" (UID: "f2ae954b-a362-4cd1-8e54-c4aedcf30a00"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:44:32.592253 master-0 kubenswrapper[6980]: I0313 12:44:32.592205 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-var-lock" (OuterVolumeSpecName: "var-lock") pod "f2ae954b-a362-4cd1-8e54-c4aedcf30a00" (UID: "f2ae954b-a362-4cd1-8e54-c4aedcf30a00"). InnerVolumeSpecName "var-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:44:32.604002 master-0 kubenswrapper[6980]: I0313 12:44:32.603918 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f2ae954b-a362-4cd1-8e54-c4aedcf30a00" (UID: "f2ae954b-a362-4cd1-8e54-c4aedcf30a00"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:44:32.693969 master-0 kubenswrapper[6980]: I0313 12:44:32.693882 6980 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:44:32.693969 master-0 kubenswrapper[6980]: I0313 12:44:32.693958 6980 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:44:32.693969 master-0 kubenswrapper[6980]: I0313 12:44:32.693971 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2ae954b-a362-4cd1-8e54-c4aedcf30a00-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 12:44:33.106059 master-0 kubenswrapper[6980]: I0313 12:44:33.106022 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"f2ae954b-a362-4cd1-8e54-c4aedcf30a00","Type":"ContainerDied","Data":"2c009957e6c0e1187ad15c0418c800a108103fed32e75490f5bcdf096c17f2c6"}
Mar 13 12:44:33.106417 master-0 kubenswrapper[6980]: I0313 12:44:33.106065 6980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c009957e6c0e1187ad15c0418c800a108103fed32e75490f5bcdf096c17f2c6"
Mar 13 12:44:33.106417 master-0 kubenswrapper[6980]: I0313 12:44:33.106154 6980 util.go:48] "No
ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0"
Mar 13 12:44:34.116688 master-0 kubenswrapper[6980]: I0313 12:44:34.116544 6980 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="47aafa637897db874d2f314c91d98220473a29ec5c1860c9183088400b424069" exitCode=1
Mar 13 12:44:34.116688 master-0 kubenswrapper[6980]: I0313 12:44:34.116616 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"47aafa637897db874d2f314c91d98220473a29ec5c1860c9183088400b424069"}
Mar 13 12:44:34.116688 master-0 kubenswrapper[6980]: I0313 12:44:34.116675 6980 scope.go:117] "RemoveContainer" containerID="5b237a8f0fb7f64dfadac55f3b8fce83d665c3145bdb4f7b5e426e2db8133d9a"
Mar 13 12:44:34.117568 master-0 kubenswrapper[6980]: I0313 12:44:34.117191 6980 scope.go:117] "RemoveContainer" containerID="47aafa637897db874d2f314c91d98220473a29ec5c1860c9183088400b424069"
Mar 13 12:44:34.117568 master-0 kubenswrapper[6980]: E0313 12:44:34.117477 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-scheduler\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-scheduler pod=bootstrap-kube-scheduler-master-0_kube-system(a1a56802af72ce1aac6b5077f1695ac0)\"" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0"
Mar 13 12:44:34.410594 master-0 kubenswrapper[6980]: I0313 12:44:34.410514 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:44:34.411403 master-0 kubenswrapper[6980]: I0313 12:44:34.411376 6980 scope.go:117] "RemoveContainer" containerID="0525d6d9761fef0346024ae4ee861ade4aa61a544af90b2159fea9caf5944f65"
Mar 13 12:44:34.411730 master-0 kubenswrapper[6980]: E0313
12:44:34.411677 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:44:38.190991 master-0 kubenswrapper[6980]: I0313 12:44:38.190909 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6vng8" event={"ID":"cf9f90f5-643f-41e8-a886-7d19fb064afc","Type":"ContainerStarted","Data":"a548c5698b60c0c1ec40a54ad0ffb763b006792c42fd25a03aedc6782c4f7291"}
Mar 13 12:44:38.195034 master-0 kubenswrapper[6980]: I0313 12:44:38.194970 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-28fdg" event={"ID":"5623ea13-a34b-4510-8902-341912d115df","Type":"ContainerStarted","Data":"14761a295276b616fdb43bdc6f2485271d8c9f2f6af0ad72d9719d96d2ccaf06"}
Mar 13 12:44:38.197923 master-0 kubenswrapper[6980]: I0313 12:44:38.197868 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-92rsn" event={"ID":"730e1f43-39b7-41de-ac81-270966725477","Type":"ContainerStarted","Data":"f1637f083e4a7b0518bd9796a850dbb052dc947799667bf79bb3ea47be744f77"}
Mar 13 12:44:38.201647 master-0 kubenswrapper[6980]: I0313 12:44:38.201599 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6w8hd" event={"ID":"b6a9184d-0557-4e61-bf31-6dd69c0dfb15","Type":"ContainerStarted","Data":"88861bacdd669e96749323786ee87c0d8f039ff2417e11e53942d93f33dd8b16"}
Mar 13 12:44:40.004893 master-0 kubenswrapper[6980]: E0313 12:44:40.004815 6980 controller.go:195] "Failed to update lease" err="Put
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:44:40.136637 master-0 kubenswrapper[6980]: I0313 12:44:40.136562 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:44:40.137376 master-0 kubenswrapper[6980]: I0313 12:44:40.137351 6980 scope.go:117] "RemoveContainer" containerID="0525d6d9761fef0346024ae4ee861ade4aa61a544af90b2159fea9caf5944f65"
Mar 13 12:44:40.137695 master-0 kubenswrapper[6980]: E0313 12:44:40.137664 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:44:41.857288 master-0 kubenswrapper[6980]: I0313 12:44:41.857201 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:44:41.858822 master-0 kubenswrapper[6980]: I0313 12:44:41.858768 6980 scope.go:117] "RemoveContainer" containerID="0525d6d9761fef0346024ae4ee861ade4aa61a544af90b2159fea9caf5944f65"
Mar 13 12:44:41.859169 master-0 kubenswrapper[6980]: E0313 12:44:41.859126 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:44:42.515554
master-0 kubenswrapper[6980]: I0313 12:44:42.515475 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6vng8"
Mar 13 12:44:42.515554 master-0 kubenswrapper[6980]: I0313 12:44:42.515558 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6vng8"
Mar 13 12:44:42.557622 master-0 kubenswrapper[6980]: I0313 12:44:42.557551 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6vng8"
Mar 13 12:44:42.738997 master-0 kubenswrapper[6980]: I0313 12:44:42.738936 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6w8hd"
Mar 13 12:44:42.738997 master-0 kubenswrapper[6980]: I0313 12:44:42.738993 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6w8hd"
Mar 13 12:44:42.773396 master-0 kubenswrapper[6980]: I0313 12:44:42.773257 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6w8hd"
Mar 13 12:44:43.280873 master-0 kubenswrapper[6980]: I0313 12:44:43.280819 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6w8hd"
Mar 13 12:44:43.281679 master-0 kubenswrapper[6980]: I0313 12:44:43.281649 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6vng8"
Mar 13 12:44:43.870880 master-0 kubenswrapper[6980]: E0313 12:44:43.870744 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:44:33Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:44:33Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:44:33Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:44:33Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[],\\\"sizeBytes\\\":1284762325},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c178447
80e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeByte
s\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f
47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[],\\\"sizeBytes\\\":470822665},{\\\"names\\\":[],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[],\\\"sizeBytes\\\":467234714},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[],\\\"sizeBytes\\\":456374430},{\\\"names\\\":[],\\\"sizeBytes\\\":455416776}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:44:44.111295 master-0 kubenswrapper[6980]: I0313 12:44:44.111195 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-92rsn"
Mar 13 12:44:44.111295 master-0 kubenswrapper[6980]: I0313 12:44:44.111284 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-92rsn"
Mar 13 12:44:44.147974 master-0 kubenswrapper[6980]: I0313 12:44:44.147912 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-92rsn"
Mar 13 12:44:44.284093 master-0 kubenswrapper[6980]: I0313 12:44:44.284031 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-92rsn"
Mar 13 12:44:45.322351 master-0 kubenswrapper[6980]: I0313 12:44:45.322286 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-28fdg"
Mar 13 12:44:45.323147 master-0 kubenswrapper[6980]: I0313 12:44:45.323131 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-28fdg"
Mar 13 12:44:45.358515 master-0 kubenswrapper[6980]: I0313 12:44:45.358466 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-28fdg"
Mar 13 12:44:46.295423 master-0 kubenswrapper[6980]: I0313 12:44:46.295333 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready"
pod="openshift-marketplace/redhat-operators-28fdg"
Mar 13 12:44:46.859851 master-0 kubenswrapper[6980]: I0313 12:44:46.859763 6980 scope.go:117] "RemoveContainer" containerID="47aafa637897db874d2f314c91d98220473a29ec5c1860c9183088400b424069"
Mar 13 12:44:47.269072 master-0 kubenswrapper[6980]: I0313 12:44:47.268991 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"20738ab02637717910251883b8d669f0a85804f124bfcd78ee15eab7a5a827e7"}
Mar 13 12:44:47.271894 master-0 kubenswrapper[6980]: I0313 12:44:47.271828 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log"
Mar 13 12:44:47.272975 master-0 kubenswrapper[6980]: I0313 12:44:47.272942 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log"
Mar 13 12:44:47.273759 master-0 kubenswrapper[6980]: I0313 12:44:47.273728 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log"
Mar 13 12:44:47.274242 master-0 kubenswrapper[6980]: I0313 12:44:47.274203 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log"
Mar 13 12:44:47.275549 master-0 kubenswrapper[6980]: I0313 12:44:47.275501 6980 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="6b009be90010b458906ee5384812043c64b344c57f3d33c0327bca957e554f6b" exitCode=137
Mar 13 12:44:47.275549 master-0 kubenswrapper[6980]: I0313 12:44:47.275538 6980 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="5174065d158bac4c4f8df59a6fd09da4b437cfcdb6c1e02c2fa3d32ae43403ab" exitCode=137
Mar 13 12:44:47.275662
master-0 kubenswrapper[6980]: I0313 12:44:47.275610 6980 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f0e851477be2d69038712518ed1a5f5d94544dd20cc5ae90880136f09179a721" Mar 13 12:44:47.279639 master-0 kubenswrapper[6980]: I0313 12:44:47.279595 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 12:44:47.280526 master-0 kubenswrapper[6980]: I0313 12:44:47.280475 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 12:44:47.281281 master-0 kubenswrapper[6980]: I0313 12:44:47.281213 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 12:44:47.281795 master-0 kubenswrapper[6980]: I0313 12:44:47.281754 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 12:44:47.283558 master-0 kubenswrapper[6980]: I0313 12:44:47.283505 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:44:47.457306 master-0 kubenswrapper[6980]: I0313 12:44:47.457212 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:44:47.457306 master-0 kubenswrapper[6980]: I0313 12:44:47.457274 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:44:47.457306 master-0 kubenswrapper[6980]: I0313 12:44:47.457300 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:44:47.457747 master-0 kubenswrapper[6980]: I0313 12:44:47.457354 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:44:47.457747 master-0 kubenswrapper[6980]: I0313 12:44:47.457373 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:44:47.457747 master-0 kubenswrapper[6980]: I0313 12:44:47.457430 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 12:44:47.457747 master-0 kubenswrapper[6980]: I0313 12:44:47.457479 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "usr-local-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:44:47.457747 master-0 kubenswrapper[6980]: I0313 12:44:47.457473 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:44:47.457747 master-0 kubenswrapper[6980]: I0313 12:44:47.457488 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:44:47.457747 master-0 kubenswrapper[6980]: I0313 12:44:47.457469 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir" (OuterVolumeSpecName: "data-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:44:47.457747 master-0 kubenswrapper[6980]: I0313 12:44:47.457509 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir" (OuterVolumeSpecName: "log-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:44:47.457747 master-0 kubenswrapper[6980]: I0313 12:44:47.457615 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:44:47.458198 master-0 kubenswrapper[6980]: I0313 12:44:47.457808 6980 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:47.458198 master-0 kubenswrapper[6980]: I0313 12:44:47.457822 6980 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:47.458198 master-0 kubenswrapper[6980]: I0313 12:44:47.457840 6980 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:47.458198 master-0 kubenswrapper[6980]: I0313 12:44:47.457887 6980 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") on node \"master-0\" 
DevicePath \"\"" Mar 13 12:44:47.458198 master-0 kubenswrapper[6980]: I0313 12:44:47.457903 6980 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:47.458198 master-0 kubenswrapper[6980]: I0313 12:44:47.457916 6980 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:48.284701 master-0 kubenswrapper[6980]: I0313 12:44:48.284626 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:44:48.866694 master-0 kubenswrapper[6980]: I0313 12:44:48.866622 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" path="/var/lib/kubelet/pods/8e52bef89f4b50e4590a1719bcc5d7e5/volumes" Mar 13 12:44:50.005745 master-0 kubenswrapper[6980]: E0313 12:44:50.005638 6980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:44:52.859872 master-0 kubenswrapper[6980]: I0313 12:44:52.859799 6980 scope.go:117] "RemoveContainer" containerID="0525d6d9761fef0346024ae4ee861ade4aa61a544af90b2159fea9caf5944f65" Mar 13 12:44:53.315162 master-0 kubenswrapper[6980]: I0313 12:44:53.315021 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"fd4e7469897ad2a34d49740a7cbc3c467e051315df91103a5d9b65c6adc6a4b7"} Mar 13 12:44:53.859917 master-0 kubenswrapper[6980]: I0313 12:44:53.859817 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:44:53.871207 master-0 kubenswrapper[6980]: E0313 12:44:53.871136 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:44:53.876896 master-0 kubenswrapper[6980]: I0313 12:44:53.876832 6980 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:44:53.877055 master-0 kubenswrapper[6980]: I0313 12:44:53.876912 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:44:58.480550 master-0 kubenswrapper[6980]: E0313 12:44:58.480362 6980 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-samples-operator-664cb58b85-78swz.189c673ce29dfefa openshift-cluster-samples-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-cluster-samples-operator,Name:cluster-samples-operator-664cb58b85-78swz,UID:5ae6e46f-a465-46e6-bc27-d13fc6f90d8c,APIVersion:v1,ResourceVersion:9842,FieldPath:spec.containers{cluster-samples-operator},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:263827a457b3cc707bdd050873234f5d0892a553af5cfab13f8db75de762d4cf\" in 32.653s (32.653s including waiting). 
Image size: 455416776 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:44:24.47483673 +0000 UTC m=+331.808831356,LastTimestamp:2026-03-13 12:44:24.47483673 +0000 UTC m=+331.808831356,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:44:58.905115 master-0 kubenswrapper[6980]: I0313 12:44:58.905019 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Mar 13 12:44:58.905425 master-0 kubenswrapper[6980]: I0313 12:44:58.905118 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Mar 13 12:44:59.349773 master-0 kubenswrapper[6980]: I0313 12:44:59.349615 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-559568b945-jdwm7_b515c4c5-cec7-46d2-a435-1d46e26c30b8/cluster-cloud-controller-manager/0.log" Mar 13 12:44:59.349773 master-0 kubenswrapper[6980]: I0313 12:44:59.349675 6980 generic.go:334] "Generic (PLEG): container finished" podID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerID="1420df773cc584df67e7d58ebf0d1458c5a4185ad5125dd7d19578d21064ab48" exitCode=137 Mar 13 12:44:59.349773 master-0 kubenswrapper[6980]: I0313 12:44:59.349712 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" event={"ID":"b515c4c5-cec7-46d2-a435-1d46e26c30b8","Type":"ContainerDied","Data":"1420df773cc584df67e7d58ebf0d1458c5a4185ad5125dd7d19578d21064ab48"} Mar 13 12:44:59.633848 master-0 kubenswrapper[6980]: I0313 12:44:59.633793 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-559568b945-jdwm7_b515c4c5-cec7-46d2-a435-1d46e26c30b8/cluster-cloud-controller-manager/0.log" Mar 13 12:44:59.634496 master-0 kubenswrapper[6980]: I0313 12:44:59.633872 6980 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:44:59.783136 master-0 kubenswrapper[6980]: I0313 12:44:59.783072 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjdl6\" (UniqueName: \"kubernetes.io/projected/b515c4c5-cec7-46d2-a435-1d46e26c30b8-kube-api-access-hjdl6\") pod \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " Mar 13 12:44:59.783136 master-0 kubenswrapper[6980]: I0313 12:44:59.783130 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-images\") pod \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " Mar 13 12:44:59.783430 master-0 kubenswrapper[6980]: I0313 12:44:59.783184 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/b515c4c5-cec7-46d2-a435-1d46e26c30b8-cloud-controller-manager-operator-tls\") pod \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " Mar 13 
12:44:59.783430 master-0 kubenswrapper[6980]: I0313 12:44:59.783233 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b515c4c5-cec7-46d2-a435-1d46e26c30b8-host-etc-kube\") pod \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " Mar 13 12:44:59.783430 master-0 kubenswrapper[6980]: I0313 12:44:59.783287 6980 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-auth-proxy-config\") pod \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\" (UID: \"b515c4c5-cec7-46d2-a435-1d46e26c30b8\") " Mar 13 12:44:59.784091 master-0 kubenswrapper[6980]: I0313 12:44:59.783831 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b515c4c5-cec7-46d2-a435-1d46e26c30b8-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "b515c4c5-cec7-46d2-a435-1d46e26c30b8" (UID: "b515c4c5-cec7-46d2-a435-1d46e26c30b8"). InnerVolumeSpecName "host-etc-kube". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:44:59.784448 master-0 kubenswrapper[6980]: I0313 12:44:59.784392 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "b515c4c5-cec7-46d2-a435-1d46e26c30b8" (UID: "b515c4c5-cec7-46d2-a435-1d46e26c30b8"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:44:59.784448 master-0 kubenswrapper[6980]: I0313 12:44:59.784402 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-images" (OuterVolumeSpecName: "images") pod "b515c4c5-cec7-46d2-a435-1d46e26c30b8" (UID: "b515c4c5-cec7-46d2-a435-1d46e26c30b8"). 
InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:44:59.786935 master-0 kubenswrapper[6980]: I0313 12:44:59.786887 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b515c4c5-cec7-46d2-a435-1d46e26c30b8-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "b515c4c5-cec7-46d2-a435-1d46e26c30b8" (UID: "b515c4c5-cec7-46d2-a435-1d46e26c30b8"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:44:59.787064 master-0 kubenswrapper[6980]: I0313 12:44:59.786972 6980 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b515c4c5-cec7-46d2-a435-1d46e26c30b8-kube-api-access-hjdl6" (OuterVolumeSpecName: "kube-api-access-hjdl6") pod "b515c4c5-cec7-46d2-a435-1d46e26c30b8" (UID: "b515c4c5-cec7-46d2-a435-1d46e26c30b8"). InnerVolumeSpecName "kube-api-access-hjdl6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:44:59.884857 master-0 kubenswrapper[6980]: I0313 12:44:59.884807 6980 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjdl6\" (UniqueName: \"kubernetes.io/projected/b515c4c5-cec7-46d2-a435-1d46e26c30b8-kube-api-access-hjdl6\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:59.885110 master-0 kubenswrapper[6980]: I0313 12:44:59.885097 6980 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-images\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:59.885202 master-0 kubenswrapper[6980]: I0313 12:44:59.885184 6980 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/b515c4c5-cec7-46d2-a435-1d46e26c30b8-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:59.885271 master-0 kubenswrapper[6980]: I0313 12:44:59.885260 6980 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/b515c4c5-cec7-46d2-a435-1d46e26c30b8-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 13 12:44:59.885376 master-0 kubenswrapper[6980]: I0313 12:44:59.885361 6980 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b515c4c5-cec7-46d2-a435-1d46e26c30b8-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:45:00.006688 master-0 kubenswrapper[6980]: E0313 12:45:00.006635 6980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:45:00.137726 master-0 kubenswrapper[6980]: I0313 12:45:00.137481 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:45:00.358139 master-0 kubenswrapper[6980]: I0313 12:45:00.358079 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-559568b945-jdwm7_b515c4c5-cec7-46d2-a435-1d46e26c30b8/cluster-cloud-controller-manager/0.log" Mar 13 12:45:00.358356 master-0 kubenswrapper[6980]: I0313 12:45:00.358155 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" event={"ID":"b515c4c5-cec7-46d2-a435-1d46e26c30b8","Type":"ContainerDied","Data":"52951c4c44dc81befb3c2b4f3b24d955875d716dca38e58955a268051a926b8b"} Mar 13 12:45:00.358356 master-0 kubenswrapper[6980]: I0313 12:45:00.358201 6980 scope.go:117] "RemoveContainer" containerID="fdca8f603a7412529d48ec01b75993ba4d02eef34abe5398d9e166f2bc343f69" Mar 13 12:45:00.358356 master-0 kubenswrapper[6980]: I0313 12:45:00.358349 6980 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" Mar 13 12:45:00.382485 master-0 kubenswrapper[6980]: I0313 12:45:00.382446 6980 scope.go:117] "RemoveContainer" containerID="d01169f88cb74b74b05f9bbdd9537a1f86586c2ee82594e44b6b203eec9dc752" Mar 13 12:45:00.397063 master-0 kubenswrapper[6980]: I0313 12:45:00.397029 6980 scope.go:117] "RemoveContainer" containerID="1420df773cc584df67e7d58ebf0d1458c5a4185ad5125dd7d19578d21064ab48" Mar 13 12:45:01.857759 master-0 kubenswrapper[6980]: I0313 12:45:01.857692 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:45:03.138281 master-0 kubenswrapper[6980]: I0313 12:45:03.138179 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:45:03.872365 master-0 kubenswrapper[6980]: E0313 12:45:03.872276 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 13 12:45:08.904482 master-0 kubenswrapper[6980]: I0313 12:45:08.904273 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Mar 13 12:45:08.904482 master-0 kubenswrapper[6980]: I0313 12:45:08.904428 6980 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Mar 13 12:45:08.905390 master-0 kubenswrapper[6980]: I0313 12:45:08.904553 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:45:08.905786 master-0 kubenswrapper[6980]: I0313 12:45:08.905725 6980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"28de3c0d71e1d169ce2f9898912a1f5317bd1fcad7bcb8ebdacbc8bc917680f8"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" containerMessage="Container authentication-operator failed liveness probe, will be restarted" Mar 13 12:45:08.905892 master-0 kubenswrapper[6980]: I0313 12:45:08.905854 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" containerID="cri-o://28de3c0d71e1d169ce2f9898912a1f5317bd1fcad7bcb8ebdacbc8bc917680f8" gracePeriod=30 Mar 13 12:45:09.413944 master-0 kubenswrapper[6980]: I0313 12:45:09.413874 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/2.log" Mar 13 12:45:09.414594 master-0 kubenswrapper[6980]: I0313 12:45:09.414554 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/1.log" Mar 13 12:45:09.414755 
master-0 kubenswrapper[6980]: I0313 12:45:09.414725 6980 generic.go:334] "Generic (PLEG): container finished" podID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerID="28de3c0d71e1d169ce2f9898912a1f5317bd1fcad7bcb8ebdacbc8bc917680f8" exitCode=255 Mar 13 12:45:09.414850 master-0 kubenswrapper[6980]: I0313 12:45:09.414802 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerDied","Data":"28de3c0d71e1d169ce2f9898912a1f5317bd1fcad7bcb8ebdacbc8bc917680f8"} Mar 13 12:45:09.414927 master-0 kubenswrapper[6980]: I0313 12:45:09.414911 6980 scope.go:117] "RemoveContainer" containerID="35443773bcdd37ca280fdba5333615f02daa51365a0b805a941d21a3cf11ec6c" Mar 13 12:45:10.007947 master-0 kubenswrapper[6980]: E0313 12:45:10.007865 6980 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:45:10.008648 master-0 kubenswrapper[6980]: I0313 12:45:10.007956 6980 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 12:45:10.486104 master-0 kubenswrapper[6980]: I0313 12:45:10.486047 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/2.log" Mar 13 12:45:10.486358 master-0 kubenswrapper[6980]: I0313 12:45:10.486142 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerStarted","Data":"becdefbc3ea900370e4c6923974c7ab3a31df7f5471eafde567062de1cbd3e5d"} Mar 13 
12:45:13.137478 master-0 kubenswrapper[6980]: I0313 12:45:13.137382 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:45:13.873716 master-0 kubenswrapper[6980]: E0313 12:45:13.873621 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:45:20.009148 master-0 kubenswrapper[6980]: E0313 12:45:20.009012 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 13 12:45:23.137675 master-0 kubenswrapper[6980]: I0313 12:45:23.137545 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:45:23.138109 master-0 kubenswrapper[6980]: I0313 12:45:23.137769 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:45:23.138863 master-0 kubenswrapper[6980]: I0313 12:45:23.138815 6980 kuberuntime_manager.go:1027] "Message for Container of pod" 
containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"fd4e7469897ad2a34d49740a7cbc3c467e051315df91103a5d9b65c6adc6a4b7"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 12:45:23.138941 master-0 kubenswrapper[6980]: I0313 12:45:23.138918 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://fd4e7469897ad2a34d49740a7cbc3c467e051315df91103a5d9b65c6adc6a4b7" gracePeriod=30 Mar 13 12:45:23.564310 master-0 kubenswrapper[6980]: I0313 12:45:23.564244 6980 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="fd4e7469897ad2a34d49740a7cbc3c467e051315df91103a5d9b65c6adc6a4b7" exitCode=2 Mar 13 12:45:23.564310 master-0 kubenswrapper[6980]: I0313 12:45:23.564294 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"fd4e7469897ad2a34d49740a7cbc3c467e051315df91103a5d9b65c6adc6a4b7"} Mar 13 12:45:23.564599 master-0 kubenswrapper[6980]: I0313 12:45:23.564327 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7"} Mar 13 12:45:23.564599 master-0 kubenswrapper[6980]: I0313 12:45:23.564346 6980 scope.go:117] "RemoveContainer" containerID="0525d6d9761fef0346024ae4ee861ade4aa61a544af90b2159fea9caf5944f65" Mar 13 12:45:23.874844 master-0 kubenswrapper[6980]: E0313 12:45:23.874727 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node 
\"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:45:23.874844 master-0 kubenswrapper[6980]: E0313 12:45:23.874764 6980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 12:45:25.917488 master-0 kubenswrapper[6980]: I0313 12:45:25.917404 6980 status_manager.go:851] "Failed to get status for pod" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods authentication-operator-7c6989d6c4-ztmrr)" Mar 13 12:45:26.989423 master-0 kubenswrapper[6980]: E0313 12:45:26.989326 6980 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 12:45:26.989423 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54" Netns:"/var/run/netns/38af0c04-7455-4f99-b56e-101b952eae1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: 
[openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 12:45:26.989423 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:45:26.989423 master-0 kubenswrapper[6980]: > Mar 13 12:45:26.990135 master-0 kubenswrapper[6980]: E0313 12:45:26.989447 6980 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 12:45:26.990135 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54" Netns:"/var/run/netns/38af0c04-7455-4f99-b56e-101b952eae1d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 12:45:26.990135 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:45:26.990135 master-0 kubenswrapper[6980]: > pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:45:26.990135 master-0 kubenswrapper[6980]: E0313 12:45:26.989476 6980 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 12:45:26.990135 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 
to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54" Netns:"/var/run/netns/38af0c04-7455-4f99-b56e-101b952eae1d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 12:45:26.990135 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:45:26.990135 master-0 kubenswrapper[6980]: > pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:45:26.990135 master-0 kubenswrapper[6980]: E0313 12:45:26.989563 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"installer-1-retry-1-master-0_openshift-kube-apiserver(bc244427-5e4e-441c-a04d-f93aeca9b7c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-1-retry-1-master-0_openshift-kube-apiserver(bc244427-5e4e-441c-a04d-f93aeca9b7c1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54\\\" Netns:\\\"/var/run/netns/38af0c04-7455-4f99-b56e-101b952eae1d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=296ba43b7fd05f86f82b9be38443967417aebdbb216ae5153c3a45d566f51e54;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="bc244427-5e4e-441c-a04d-f93aeca9b7c1" Mar 13 12:45:27.592670 master-0 kubenswrapper[6980]: I0313 12:45:27.592627 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/0.log" Mar 13 12:45:27.592970 master-0 kubenswrapper[6980]: I0313 12:45:27.592686 6980 generic.go:334] "Generic (PLEG): container finished" podID="e0763043-3813-43b6-9618-b2d15c942edb" containerID="e43d4ab25b60b182304f9a1ec5ee5cd5ceec1bb5ed341ff305cc05db8e8062fd" exitCode=1 Mar 13 12:45:27.592970 master-0 kubenswrapper[6980]: I0313 12:45:27.592764 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:45:27.592970 master-0 kubenswrapper[6980]: I0313 12:45:27.592813 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" event={"ID":"e0763043-3813-43b6-9618-b2d15c942edb","Type":"ContainerDied","Data":"e43d4ab25b60b182304f9a1ec5ee5cd5ceec1bb5ed341ff305cc05db8e8062fd"} Mar 13 12:45:27.593216 master-0 kubenswrapper[6980]: I0313 12:45:27.593195 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:45:27.593506 master-0 kubenswrapper[6980]: I0313 12:45:27.593476 6980 scope.go:117] "RemoveContainer" containerID="e43d4ab25b60b182304f9a1ec5ee5cd5ceec1bb5ed341ff305cc05db8e8062fd" Mar 13 12:45:27.880325 master-0 kubenswrapper[6980]: E0313 12:45:27.880266 6980 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 12:45:27.881101 master-0 kubenswrapper[6980]: I0313 12:45:27.881053 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 12:45:27.900805 master-0 kubenswrapper[6980]: W0313 12:45:27.900723 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c709c82970b529e7b9b895aa92ef05.slice/crio-e6d506914f674acae7c420a21d64287e5d50a2208f22be2bad24040b690bdfea WatchSource:0}: Error finding container e6d506914f674acae7c420a21d64287e5d50a2208f22be2bad24040b690bdfea: Status 404 returned error can't find the container with id e6d506914f674acae7c420a21d64287e5d50a2208f22be2bad24040b690bdfea Mar 13 12:45:28.603316 master-0 kubenswrapper[6980]: I0313 12:45:28.603232 6980 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="80909fea02c110e1d4f337c6de383bf687899cf407ef04ed280f279d0fb78b05" exitCode=0 Mar 13 12:45:28.604335 master-0 kubenswrapper[6980]: I0313 12:45:28.603348 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"80909fea02c110e1d4f337c6de383bf687899cf407ef04ed280f279d0fb78b05"} Mar 13 12:45:28.604335 master-0 kubenswrapper[6980]: I0313 12:45:28.603449 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"e6d506914f674acae7c420a21d64287e5d50a2208f22be2bad24040b690bdfea"} Mar 13 12:45:28.604335 master-0 kubenswrapper[6980]: I0313 12:45:28.603894 6980 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:45:28.604335 master-0 kubenswrapper[6980]: I0313 12:45:28.603915 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:45:28.606758 master-0 kubenswrapper[6980]: I0313 12:45:28.606722 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/0.log" Mar 13 12:45:28.606835 master-0 kubenswrapper[6980]: I0313 12:45:28.606775 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" event={"ID":"e0763043-3813-43b6-9618-b2d15c942edb","Type":"ContainerStarted","Data":"a91387c0d2edba9c3d2e8ad94948ab7bd5619f72dc7e614cae4066aa84f51139"} Mar 13 12:45:30.139428 master-0 kubenswrapper[6980]: I0313 12:45:30.139231 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:45:30.209810 master-0 kubenswrapper[6980]: E0313 12:45:30.209676 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 13 12:45:31.857466 master-0 kubenswrapper[6980]: I0313 12:45:31.857366 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:45:32.483222 master-0 kubenswrapper[6980]: E0313 12:45:32.483068 6980 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{certified-operators-6vng8.189c673d046c5cdb openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:certified-operators-6vng8,UID:cf9f90f5-643f-41e8-a886-7d19fb064afc,APIVersion:v1,ResourceVersion:9514,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/certified-operator-index:v4.18\" in 41.285s (41.285s including waiting). Image size: 1284762325 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:44:25.042009307 +0000 UTC m=+332.376003933,LastTimestamp:2026-03-13 12:44:25.042009307 +0000 UTC m=+332.376003933,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:45:33.140290 master-0 kubenswrapper[6980]: I0313 12:45:33.140204 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:45:40.611505 master-0 kubenswrapper[6980]: E0313 12:45:40.611386 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" 
interval="800ms" Mar 13 12:45:43.137913 master-0 kubenswrapper[6980]: I0313 12:45:43.137684 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:45:43.912036 master-0 kubenswrapper[6980]: E0313 12:45:43.911845 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:45:33Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:45:33Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:45:33Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:45:33Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4855408bd0e4d0711383d0c14dcad53c98255ff9f83f6cbefb57e47eacc1f1f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:97bdbb5854e4ad
7976209a44cff02c8a2b9542f58ad007c06a5c3a5e8266def1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284762325},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:27f5385c5b700fb400a618b51a628f0db39afa4a8db03380252ca5abf49518da\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:3d8cd257adb4bde31657aa6b0fe5da54d74b1f9eda5457c8dee929ed64ecece0\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221692102},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\
\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0
f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\
\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d\\\"],\\\"sizeBytes\\\":470822665},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999}]}}\" for node \"master-0\": the server was unable to return a response in the time allotted, but may still be processing the request (patch nodes master-0)" Mar 13 12:45:48.904136 master-0 kubenswrapper[6980]: I0313 12:45:48.904038 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Mar 13 12:45:48.904136 master-0 kubenswrapper[6980]: I0313 12:45:48.904123 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Mar 13 12:45:51.413714 master-0 kubenswrapper[6980]: E0313 12:45:51.413621 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers)" interval="1.6s" Mar 13 12:45:53.137984 master-0 kubenswrapper[6980]: I0313 12:45:53.137882 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:45:53.138677 master-0 kubenswrapper[6980]: I0313 12:45:53.138066 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:45:53.138960 master-0 kubenswrapper[6980]: I0313 12:45:53.138921 6980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 12:45:53.139041 master-0 kubenswrapper[6980]: I0313 12:45:53.139000 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7" gracePeriod=30 Mar 13 12:45:53.343846 master-0 kubenswrapper[6980]: E0313 12:45:53.343804 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:45:53.758823 master-0 kubenswrapper[6980]: I0313 12:45:53.758773 6980 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7" exitCode=2
Mar 13 12:45:53.759104 master-0 kubenswrapper[6980]: I0313 12:45:53.758856 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7"}
Mar 13 12:45:53.759217 master-0 kubenswrapper[6980]: I0313 12:45:53.759201 6980 scope.go:117] "RemoveContainer" containerID="fd4e7469897ad2a34d49740a7cbc3c467e051315df91103a5d9b65c6adc6a4b7"
Mar 13 12:45:53.759731 master-0 kubenswrapper[6980]: I0313 12:45:53.759699 6980 scope.go:117] "RemoveContainer" containerID="db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7"
Mar 13 12:45:53.759945 master-0 kubenswrapper[6980]: E0313 12:45:53.759916 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:45:53.913049 master-0 kubenswrapper[6980]: E0313 12:45:53.912974 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:45:54.646727 master-0 kubenswrapper[6980]: I0313 12:45:54.646649 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:45:54.767803 master-0 kubenswrapper[6980]: I0313 12:45:54.767738 6980 scope.go:117] "RemoveContainer" containerID="db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7"
Mar 13 12:45:54.768060 master-0 kubenswrapper[6980]: E0313 12:45:54.767999 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:45:54.768850 master-0 kubenswrapper[6980]: I0313 12:45:54.768815 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/1.log"
Mar 13 12:45:54.769650 master-0 kubenswrapper[6980]: I0313 12:45:54.769618 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/0.log"
Mar 13 12:45:54.769712 master-0 kubenswrapper[6980]: I0313 12:45:54.769666 6980 generic.go:334] "Generic (PLEG): container finished" podID="c1213b50-28bf-43ff-94c4-20616907735b" containerID="8d4eec45db8103811e7a9ea0a4ee194d4eaf95e2b884bee0f9c64da3657f0e11" exitCode=1
Mar 13 12:45:54.769712 master-0 kubenswrapper[6980]: I0313 12:45:54.769692 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" event={"ID":"c1213b50-28bf-43ff-94c4-20616907735b","Type":"ContainerDied","Data":"8d4eec45db8103811e7a9ea0a4ee194d4eaf95e2b884bee0f9c64da3657f0e11"}
Mar 13 12:45:54.769781 master-0 kubenswrapper[6980]: I0313 12:45:54.769722 6980 scope.go:117] "RemoveContainer" containerID="5568c74bf78103146825d0653ed59a230ea4678a37b99c81a8ff3d46062174bd"
Mar 13 12:45:54.770177 master-0 kubenswrapper[6980]: I0313 12:45:54.770134 6980 scope.go:117] "RemoveContainer" containerID="8d4eec45db8103811e7a9ea0a4ee194d4eaf95e2b884bee0f9c64da3657f0e11"
Mar 13 12:45:54.770399 master-0 kubenswrapper[6980]: E0313 12:45:54.770350 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-9nxcz_openshift-ingress-operator(c1213b50-28bf-43ff-94c4-20616907735b)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" podUID="c1213b50-28bf-43ff-94c4-20616907735b"
Mar 13 12:45:55.776117 master-0 kubenswrapper[6980]: I0313 12:45:55.776062 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/1.log"
Mar 13 12:45:57.787760 master-0 kubenswrapper[6980]: I0313 12:45:57.787695 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-hr4ws_b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/cluster-storage-operator/0.log"
Mar 13 12:45:57.787760 master-0 kubenswrapper[6980]: I0313 12:45:57.787755 6980 generic.go:334] "Generic (PLEG): container finished" podID="b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2" containerID="9d44e8bb2becf09a5e1714163e260e9eea95f1c259f54fde8d48b3b6f2a4d308" exitCode=255
Mar 13 12:45:57.788365 master-0 kubenswrapper[6980]: I0313 12:45:57.787796 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" event={"ID":"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2","Type":"ContainerDied","Data":"9d44e8bb2becf09a5e1714163e260e9eea95f1c259f54fde8d48b3b6f2a4d308"}
Mar 13 12:45:57.788365 master-0 kubenswrapper[6980]: I0313 12:45:57.788339 6980 scope.go:117] "RemoveContainer" containerID="9d44e8bb2becf09a5e1714163e260e9eea95f1c259f54fde8d48b3b6f2a4d308"
Mar 13 12:45:58.796558 master-0 kubenswrapper[6980]: I0313 12:45:58.796488 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-hr4ws_b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/cluster-storage-operator/0.log"
Mar 13 12:45:58.796558 master-0 kubenswrapper[6980]: I0313 12:45:58.796569 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" event={"ID":"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2","Type":"ContainerStarted","Data":"6fa945ba8a78d2026eeaa5c65617e884ae33b65d477ef3f125c934aff5ce456b"}
Mar 13 12:45:58.798797 master-0 kubenswrapper[6980]: I0313 12:45:58.798738 6980 generic.go:334] "Generic (PLEG): container finished" podID="6e4e773c-d970-4f5e-9172-c1ebdb41888d" containerID="712ae7e99e5d583d4f1cf7b4f887ed7099fd3d43e3fe5272361b3bb4ea67be51" exitCode=0
Mar 13 12:45:58.798899 master-0 kubenswrapper[6980]: I0313 12:45:58.798800 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" event={"ID":"6e4e773c-d970-4f5e-9172-c1ebdb41888d","Type":"ContainerDied","Data":"712ae7e99e5d583d4f1cf7b4f887ed7099fd3d43e3fe5272361b3bb4ea67be51"}
Mar 13 12:45:58.799217 master-0 kubenswrapper[6980]: I0313 12:45:58.799166 6980 scope.go:117] "RemoveContainer" containerID="712ae7e99e5d583d4f1cf7b4f887ed7099fd3d43e3fe5272361b3bb4ea67be51"
Mar 13 12:45:58.904360 master-0 kubenswrapper[6980]: I0313 12:45:58.904300 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body=
Mar 13 12:45:58.904581 master-0 kubenswrapper[6980]: I0313 12:45:58.904365 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused"
Mar 13 12:45:59.806653 master-0 kubenswrapper[6980]: I0313 12:45:59.806575 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" event={"ID":"6e4e773c-d970-4f5e-9172-c1ebdb41888d","Type":"ContainerStarted","Data":"78ae5f5f6dbecb618369b89512191ed3dcff14b5aecf6f0222631f845d48f587"}
Mar 13 12:45:59.808407 master-0 kubenswrapper[6980]: I0313 12:45:59.806939 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:45:59.808407 master-0 kubenswrapper[6980]: I0313 12:45:59.808244 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:46:02.607377 master-0 kubenswrapper[6980]: E0313 12:46:02.607320 6980 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 13 12:46:03.014531 master-0 kubenswrapper[6980]: E0313 12:46:03.014424 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s"
Mar 13 12:46:03.832325 master-0 kubenswrapper[6980]: I0313 12:46:03.832263 6980 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="4e259bb22c1fb9d57fe107d5100650bf71d49eface516a2d4a5344dcf66f776b" exitCode=0
Mar 13 12:46:03.833076 master-0 kubenswrapper[6980]: I0313 12:46:03.832319 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"4e259bb22c1fb9d57fe107d5100650bf71d49eface516a2d4a5344dcf66f776b"}
Mar 13 12:46:03.833552 master-0 kubenswrapper[6980]: I0313 12:46:03.833533 6980 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516"
Mar 13 12:46:03.833682 master-0 kubenswrapper[6980]: I0313 12:46:03.833665 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516"
Mar 13 12:46:03.913986 master-0 kubenswrapper[6980]: E0313 12:46:03.913910 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 12:46:05.860024 master-0 kubenswrapper[6980]: I0313 12:46:05.859977 6980 scope.go:117] "RemoveContainer" containerID="8d4eec45db8103811e7a9ea0a4ee194d4eaf95e2b884bee0f9c64da3657f0e11"
Mar 13 12:46:06.486192 master-0 kubenswrapper[6980]: E0313 12:46:06.485906 6980 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-autoscaler-operator-69576476f7-94zs2.189c673d0470084c openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:cluster-autoscaler-operator-69576476f7-94zs2,UID:6592aa5b-4a50-40f6-80a5-87e3c547018d,APIVersion:v1,ResourceVersion:9843,FieldPath:spec.containers{cluster-autoscaler-operator},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:00b591b3820682dc99f16f07a3a0a4ec06dfedba63cd0f79b998ac4509fabea3\" in 33.528s (33.528s including waiting). Image size: 456374430 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:44:25.042249804 +0000 UTC m=+332.376244440,LastTimestamp:2026-03-13 12:44:25.042249804 +0000 UTC m=+332.376244440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 12:46:06.854109 master-0 kubenswrapper[6980]: I0313 12:46:06.853978 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/1.log"
Mar 13 12:46:06.854721 master-0 kubenswrapper[6980]: I0313 12:46:06.854672 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" event={"ID":"c1213b50-28bf-43ff-94c4-20616907735b","Type":"ContainerStarted","Data":"59e05d7ef9c275462e23676df5f29c2f046e91105d4c6257aa27b85c4193fd57"}
Mar 13 12:46:08.903491 master-0 kubenswrapper[6980]: I0313 12:46:08.903379 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body=
Mar 13 12:46:08.903491 master-0 kubenswrapper[6980]: I0313 12:46:08.903465 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused"
Mar 13 12:46:08.904263 master-0 kubenswrapper[6980]: I0313 12:46:08.903533 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:46:08.904346 master-0 kubenswrapper[6980]: I0313 12:46:08.904296 6980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"becdefbc3ea900370e4c6923974c7ab3a31df7f5471eafde567062de1cbd3e5d"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" containerMessage="Container authentication-operator failed liveness probe, will be restarted"
Mar 13 12:46:08.904421 master-0 kubenswrapper[6980]: I0313 12:46:08.904345 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" containerID="cri-o://becdefbc3ea900370e4c6923974c7ab3a31df7f5471eafde567062de1cbd3e5d" gracePeriod=30
Mar 13 12:46:09.860124 master-0 kubenswrapper[6980]: I0313 12:46:09.860034 6980 scope.go:117] "RemoveContainer" containerID="db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7"
Mar 13 12:46:09.860822 master-0 kubenswrapper[6980]: E0313 12:46:09.860315 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:46:09.875825 master-0 kubenswrapper[6980]: I0313 12:46:09.875766 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/3.log"
Mar 13 12:46:09.876517 master-0 kubenswrapper[6980]: I0313 12:46:09.876490 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/2.log"
Mar 13 12:46:09.876599 master-0 kubenswrapper[6980]: I0313 12:46:09.876541 6980 generic.go:334] "Generic (PLEG): container finished" podID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerID="becdefbc3ea900370e4c6923974c7ab3a31df7f5471eafde567062de1cbd3e5d" exitCode=255
Mar 13 12:46:09.876689 master-0 kubenswrapper[6980]: I0313 12:46:09.876647 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerDied","Data":"becdefbc3ea900370e4c6923974c7ab3a31df7f5471eafde567062de1cbd3e5d"}
Mar 13 12:46:09.876874 master-0 kubenswrapper[6980]: I0313 12:46:09.876693 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerStarted","Data":"1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d"}
Mar 13 12:46:09.876874 master-0 kubenswrapper[6980]: I0313 12:46:09.876714 6980 scope.go:117] "RemoveContainer" containerID="28de3c0d71e1d169ce2f9898912a1f5317bd1fcad7bcb8ebdacbc8bc917680f8"
Mar 13 12:46:10.882772 master-0 kubenswrapper[6980]: I0313 12:46:10.882671 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/3.log"
Mar 13 12:46:11.894867 master-0 kubenswrapper[6980]: I0313 12:46:11.894827 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/2.log"
Mar 13 12:46:11.896180 master-0 kubenswrapper[6980]: I0313 12:46:11.896094 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/1.log"
Mar 13 12:46:11.896180 master-0 kubenswrapper[6980]: I0313 12:46:11.896141 6980 generic.go:334] "Generic (PLEG): container finished" podID="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53" containerID="deb0d636f01065e6f5848894d42a3d7b49a2a87af22671dc2ab13a618bfa4c1c" exitCode=1
Mar 13 12:46:11.896180 master-0 kubenswrapper[6980]: I0313 12:46:11.896174 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" event={"ID":"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53","Type":"ContainerDied","Data":"deb0d636f01065e6f5848894d42a3d7b49a2a87af22671dc2ab13a618bfa4c1c"}
Mar 13 12:46:11.896444 master-0 kubenswrapper[6980]: I0313 12:46:11.896213 6980 scope.go:117] "RemoveContainer" containerID="47445303fded563085ce6c3a29cc03ab2ac1c4b6933c47fecf2b87970e86cfe3"
Mar 13 12:46:11.896724 master-0 kubenswrapper[6980]: I0313 12:46:11.896695 6980 scope.go:117] "RemoveContainer" containerID="deb0d636f01065e6f5848894d42a3d7b49a2a87af22671dc2ab13a618bfa4c1c"
Mar 13 12:46:11.897344 master-0 kubenswrapper[6980]: E0313 12:46:11.897299 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-lf2dh_openshift-cluster-storage-operator(1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" podUID="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53"
Mar 13 12:46:12.903496 master-0 kubenswrapper[6980]: I0313 12:46:12.903442 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/2.log"
Mar 13 12:46:13.915037 master-0 kubenswrapper[6980]: E0313 12:46:13.914968 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded"
Mar 13 12:46:16.216361 master-0 kubenswrapper[6980]: E0313 12:46:16.215918 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s"
Mar 13 12:46:23.859475 master-0 kubenswrapper[6980]: I0313 12:46:23.859423 6980 scope.go:117] "RemoveContainer" containerID="db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7"
Mar 13 12:46:23.860358 master-0 kubenswrapper[6980]: E0313 12:46:23.859787 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:46:23.915383 master-0 kubenswrapper[6980]: E0313 12:46:23.915272 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded"
Mar 13 12:46:23.915383 master-0 kubenswrapper[6980]: E0313 12:46:23.915341 6980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 13 12:46:24.860475 master-0 kubenswrapper[6980]: I0313 12:46:24.860396 6980 scope.go:117] "RemoveContainer" containerID="deb0d636f01065e6f5848894d42a3d7b49a2a87af22671dc2ab13a618bfa4c1c"
Mar 13 12:46:24.861250 master-0 kubenswrapper[6980]: E0313 12:46:24.860840 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-lf2dh_openshift-cluster-storage-operator(1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" podUID="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53"
Mar 13 12:46:25.919552 master-0 kubenswrapper[6980]: I0313 12:46:25.919474 6980 status_manager.go:851] "Failed to get status for pod" podUID="6592aa5b-4a50-40f6-80a5-87e3c547018d" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-autoscaler-operator-69576476f7-94zs2)"
Mar 13 12:46:26.479327 master-0 kubenswrapper[6980]: I0313 12:46:26.479224 6980 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 12:46:26.479631 master-0 kubenswrapper[6980]: I0313 12:46:26.479361 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 12:46:28.067160 master-0 kubenswrapper[6980]: I0313 12:46:28.067091 6980 generic.go:334] "Generic (PLEG): container finished" podID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" containerID="325a312bdd5655125848695bf9ff7bb2b0934ae3b7bbc8f5febd7f2f02b8ee68" exitCode=0
Mar 13 12:46:28.067964 master-0 kubenswrapper[6980]: I0313 12:46:28.067177 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerDied","Data":"325a312bdd5655125848695bf9ff7bb2b0934ae3b7bbc8f5febd7f2f02b8ee68"}
Mar 13 12:46:28.068691 master-0 kubenswrapper[6980]: I0313 12:46:28.068671 6980 scope.go:117] "RemoveContainer" containerID="325a312bdd5655125848695bf9ff7bb2b0934ae3b7bbc8f5febd7f2f02b8ee68"
Mar 13 12:46:28.071860 master-0 kubenswrapper[6980]: I0313 12:46:28.071796 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/1.log"
Mar 13 12:46:28.074524 master-0 kubenswrapper[6980]: I0313 12:46:28.074447 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/0.log"
Mar 13 12:46:28.074524 master-0 kubenswrapper[6980]: I0313 12:46:28.074518 6980 generic.go:334] "Generic (PLEG): container finished" podID="e0763043-3813-43b6-9618-b2d15c942edb" containerID="a91387c0d2edba9c3d2e8ad94948ab7bd5619f72dc7e614cae4066aa84f51139" exitCode=1
Mar 13 12:46:28.074919 master-0 kubenswrapper[6980]: I0313 12:46:28.074563 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" event={"ID":"e0763043-3813-43b6-9618-b2d15c942edb","Type":"ContainerDied","Data":"a91387c0d2edba9c3d2e8ad94948ab7bd5619f72dc7e614cae4066aa84f51139"}
Mar 13 12:46:28.074919 master-0 kubenswrapper[6980]: I0313 12:46:28.074629 6980 scope.go:117] "RemoveContainer" containerID="e43d4ab25b60b182304f9a1ec5ee5cd5ceec1bb5ed341ff305cc05db8e8062fd"
Mar 13 12:46:28.075509 master-0 kubenswrapper[6980]: I0313 12:46:28.075330 6980 scope.go:117] "RemoveContainer" containerID="a91387c0d2edba9c3d2e8ad94948ab7bd5619f72dc7e614cae4066aa84f51139"
Mar 13 12:46:28.075694 master-0 kubenswrapper[6980]: E0313 12:46:28.075661 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-hp84r_openshift-machine-api(e0763043-3813-43b6-9618-b2d15c942edb)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" podUID="e0763043-3813-43b6-9618-b2d15c942edb"
Mar 13 12:46:28.256086 master-0 kubenswrapper[6980]: E0313 12:46:28.255997 6980 log.go:32] "RunPodSandbox from runtime service failed" err=<
Mar 13 12:46:28.256086 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f" Netns:"/var/run/netns/52cf56bd-e704-4e1d-bd20-c1d6d4829002" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 13 12:46:28.256086 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 13 12:46:28.256086 master-0 kubenswrapper[6980]: >
Mar 13 12:46:28.256368 master-0 kubenswrapper[6980]: E0313 12:46:28.256151 6980 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=<
Mar 13 12:46:28.256368 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f" Netns:"/var/run/netns/52cf56bd-e704-4e1d-bd20-c1d6d4829002" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 13 12:46:28.256368 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 13 12:46:28.256368 master-0 kubenswrapper[6980]: > pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:46:28.256368 master-0 kubenswrapper[6980]: E0313 12:46:28.256199 6980 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=<
Mar 13 12:46:28.256368 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f" Netns:"/var/run/netns/52cf56bd-e704-4e1d-bd20-c1d6d4829002" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Mar 13 12:46:28.256368 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 13 12:46:28.256368 master-0 kubenswrapper[6980]: > pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:46:28.256368 master-0 kubenswrapper[6980]: E0313 12:46:28.256306 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-1-retry-1-master-0_openshift-kube-apiserver(bc244427-5e4e-441c-a04d-f93aeca9b7c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-1-retry-1-master-0_openshift-kube-apiserver(bc244427-5e4e-441c-a04d-f93aeca9b7c1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f\\\" Netns:\\\"/var/run/netns/52cf56bd-e704-4e1d-bd20-c1d6d4829002\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=c4787e6c706a2f0a7db8a7f32442a2b56b8135a2d276cd0aa59ebf21659dae3f;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="bc244427-5e4e-441c-a04d-f93aeca9b7c1"
Mar 13 12:46:29.081305 master-0 kubenswrapper[6980]: I0313 12:46:29.081243 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerStarted","Data":"94037d184139c388b62f88d584af05330086578d35ea58336f426f811ec331bf"}
Mar 13 12:46:29.083277 master-0 kubenswrapper[6980]: I0313 12:46:29.083253 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/1.log"
Mar 13 12:46:29.083766 master-0 kubenswrapper[6980]: I0313 12:46:29.083738 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:46:29.084235 master-0 kubenswrapper[6980]: I0313 12:46:29.084218 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:46:32.617865 master-0 kubenswrapper[6980]: E0313 12:46:32.617763 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 13 12:46:34.860877 master-0 kubenswrapper[6980]: I0313 12:46:34.860808 6980 scope.go:117] "RemoveContainer" containerID="db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7"
Mar 13 12:46:34.861886 master-0 kubenswrapper[6980]: E0313 12:46:34.861156 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:46:35.860146 master-0 kubenswrapper[6980]: I0313 12:46:35.860081 6980 scope.go:117] "RemoveContainer" containerID="deb0d636f01065e6f5848894d42a3d7b49a2a87af22671dc2ab13a618bfa4c1c"
Mar 13 12:46:36.134634 master-0 kubenswrapper[6980]: I0313 12:46:36.134511 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/2.log"
Mar 13 12:46:36.135334 master-0 kubenswrapper[6980]: I0313 12:46:36.134719 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" event={"ID":"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53","Type":"ContainerStarted","Data":"60ede8da4a56b532dead0c7432bda6cee615e7836acac9f70d47ed4a4d8e1991"}
Mar 13 12:46:37.837878 master-0 kubenswrapper[6980]: E0313 12:46:37.837640 6980 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 13 12:46:38.198140 master-0 kubenswrapper[6980]: I0313 12:46:38.198038 6980 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="3fb27622d9e1b78018dcca13d6addb8dfbd6860890e08e5d986124ea734db4f5" exitCode=0
Mar 13 12:46:38.198140 master-0 kubenswrapper[6980]: I0313 12:46:38.198113 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"3fb27622d9e1b78018dcca13d6addb8dfbd6860890e08e5d986124ea734db4f5"}
Mar 13 12:46:38.198608 master-0 kubenswrapper[6980]: I0313 12:46:38.198563 6980 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516"
Mar 13 12:46:38.198682 master-0 kubenswrapper[6980]: I0313 12:46:38.198623 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516"
Mar 13 12:46:40.490446 master-0 kubenswrapper[6980]: E0313 12:46:40.490170 6980 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cluster-baremetal-operator-5cdb4c5598-hp84r.189c673d04701a77 openshift-machine-api 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-api,Name:cluster-baremetal-operator-5cdb4c5598-hp84r,UID:e0763043-3813-43b6-9618-b2d15c942edb,APIVersion:v1,ResourceVersion:9879,FieldPath:spec.containers{cluster-baremetal-operator},},Reason:Pulled,Message:Successfully pulled image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d\" in 33.308s (33.308s including waiting). Image size: 470822665 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:44:25.042254455 +0000 UTC m=+332.376249081,LastTimestamp:2026-03-13 12:44:25.042254455 +0000 UTC m=+332.376249081,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:46:43.859522 master-0 kubenswrapper[6980]: I0313 12:46:43.859480 6980 scope.go:117] "RemoveContainer" containerID="a91387c0d2edba9c3d2e8ad94948ab7bd5619f72dc7e614cae4066aa84f51139" Mar 13 12:46:43.935428 master-0 kubenswrapper[6980]: E0313 12:46:43.929846 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:46:33Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:46:33Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:46:33Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:46:33Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\
\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4855408bd0e4d0711383d0c14dcad53c98255ff9f83f6cbefb57e47eacc1f1f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:97bdbb5854e4ad7976209a44cff02c8a2b9542f58ad007c06a5c3a5e8266def1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284762325},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:27f5385c5b700fb400a618b51a628f0db39afa4a8db03380252ca5abf49518da\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:3d8cd257adb4bde31657aa6b0fe5da54d74b1f9eda5457c8dee929ed64ecece0\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221692102},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"siz
eBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d
470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\
"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f0
2ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d\\\"],\\\"sizeBytes\\\":470822665},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:46:44.237170 master-0 kubenswrapper[6980]: I0313 12:46:44.237133 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/1.log" Mar 13 12:46:44.237873 master-0 kubenswrapper[6980]: I0313 12:46:44.237830 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" event={"ID":"e0763043-3813-43b6-9618-b2d15c942edb","Type":"ContainerStarted","Data":"5ef1c2475f3f7f2424a5113e9ba281fd9a18016393e29411ab9ffb53cc7cc2df"} Mar 13 12:46:45.860458 master-0 kubenswrapper[6980]: I0313 12:46:45.860362 6980 scope.go:117] "RemoveContainer" containerID="db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7" Mar 13 12:46:45.861684 master-0 
kubenswrapper[6980]: E0313 12:46:45.860701 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:46:48.259938 master-0 kubenswrapper[6980]: I0313 12:46:48.259874 6980 generic.go:334] "Generic (PLEG): container finished" podID="7343df96-cba2-477b-8a1b-7af369620440" containerID="2da3308778e062a9343f0d3dfdc8d6eb4f753f82d1909a294c12d86a1ca52396" exitCode=0 Mar 13 12:46:48.259938 master-0 kubenswrapper[6980]: I0313 12:46:48.259923 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" event={"ID":"7343df96-cba2-477b-8a1b-7af369620440","Type":"ContainerDied","Data":"2da3308778e062a9343f0d3dfdc8d6eb4f753f82d1909a294c12d86a1ca52396"} Mar 13 12:46:48.260565 master-0 kubenswrapper[6980]: I0313 12:46:48.260453 6980 scope.go:117] "RemoveContainer" containerID="2da3308778e062a9343f0d3dfdc8d6eb4f753f82d1909a294c12d86a1ca52396" Mar 13 12:46:48.904220 master-0 kubenswrapper[6980]: I0313 12:46:48.904143 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Mar 13 12:46:48.904462 master-0 kubenswrapper[6980]: I0313 12:46:48.904230 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get 
\"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Mar 13 12:46:49.267164 master-0 kubenswrapper[6980]: I0313 12:46:49.267032 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" event={"ID":"7343df96-cba2-477b-8a1b-7af369620440","Type":"ContainerStarted","Data":"8bbe2b167360adebde379cc68ee3aad636ef3d2f38f94109c552e500950eb3b4"} Mar 13 12:46:49.267766 master-0 kubenswrapper[6980]: I0313 12:46:49.267403 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:46:49.271241 master-0 kubenswrapper[6980]: I0313 12:46:49.271217 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:46:49.618837 master-0 kubenswrapper[6980]: E0313 12:46:49.618650 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:46:53.931164 master-0 kubenswrapper[6980]: E0313 12:46:53.931045 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:46:56.479451 master-0 kubenswrapper[6980]: I0313 12:46:56.479351 6980 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:46:56.479451 master-0 
kubenswrapper[6980]: I0313 12:46:56.479437 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:46:58.904035 master-0 kubenswrapper[6980]: I0313 12:46:58.903951 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Mar 13 12:46:58.904899 master-0 kubenswrapper[6980]: I0313 12:46:58.904037 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Mar 13 12:46:59.330009 master-0 kubenswrapper[6980]: I0313 12:46:59.329915 6980 generic.go:334] "Generic (PLEG): container finished" podID="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" containerID="2e69b748a2fdfe0cc72146b5f2da55d678257606de7db5ec9d71db1e094acc7b" exitCode=0 Mar 13 12:46:59.330274 master-0 kubenswrapper[6980]: I0313 12:46:59.329997 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" event={"ID":"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0","Type":"ContainerDied","Data":"2e69b748a2fdfe0cc72146b5f2da55d678257606de7db5ec9d71db1e094acc7b"} Mar 13 12:46:59.330274 master-0 kubenswrapper[6980]: I0313 12:46:59.330095 6980 scope.go:117] "RemoveContainer" containerID="8c953c8136772ca565e28cae4ca94f4cbf7b11aff2c6a974b20aeadfaf72a3c5" Mar 13 
12:46:59.331455 master-0 kubenswrapper[6980]: I0313 12:46:59.331426 6980 scope.go:117] "RemoveContainer" containerID="2e69b748a2fdfe0cc72146b5f2da55d678257606de7db5ec9d71db1e094acc7b" Mar 13 12:46:59.333727 master-0 kubenswrapper[6980]: E0313 12:46:59.333403 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-cluster-manager pod=ovnkube-control-plane-66b55d57d-dhtgf_openshift-ovn-kubernetes(5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0)\"" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" podUID="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" Mar 13 12:47:00.861244 master-0 kubenswrapper[6980]: I0313 12:47:00.861129 6980 scope.go:117] "RemoveContainer" containerID="db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7" Mar 13 12:47:00.862047 master-0 kubenswrapper[6980]: E0313 12:47:00.861625 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:47:03.932313 master-0 kubenswrapper[6980]: E0313 12:47:03.932193 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:06.378130 master-0 kubenswrapper[6980]: I0313 12:47:06.377974 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/3.log" 
Mar 13 12:47:06.378807 master-0 kubenswrapper[6980]: I0313 12:47:06.378388 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/2.log" Mar 13 12:47:06.378807 master-0 kubenswrapper[6980]: I0313 12:47:06.378432 6980 generic.go:334] "Generic (PLEG): container finished" podID="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53" containerID="60ede8da4a56b532dead0c7432bda6cee615e7836acac9f70d47ed4a4d8e1991" exitCode=1 Mar 13 12:47:06.378807 master-0 kubenswrapper[6980]: I0313 12:47:06.378462 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" event={"ID":"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53","Type":"ContainerDied","Data":"60ede8da4a56b532dead0c7432bda6cee615e7836acac9f70d47ed4a4d8e1991"} Mar 13 12:47:06.378807 master-0 kubenswrapper[6980]: I0313 12:47:06.378500 6980 scope.go:117] "RemoveContainer" containerID="deb0d636f01065e6f5848894d42a3d7b49a2a87af22671dc2ab13a618bfa4c1c" Mar 13 12:47:06.380992 master-0 kubenswrapper[6980]: I0313 12:47:06.380920 6980 scope.go:117] "RemoveContainer" containerID="60ede8da4a56b532dead0c7432bda6cee615e7836acac9f70d47ed4a4d8e1991" Mar 13 12:47:06.381371 master-0 kubenswrapper[6980]: E0313 12:47:06.381333 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-lf2dh_openshift-cluster-storage-operator(1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" podUID="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53" Mar 13 12:47:06.620246 master-0 kubenswrapper[6980]: E0313 12:47:06.620153 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:47:07.388821 master-0 kubenswrapper[6980]: I0313 12:47:07.388758 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/3.log" Mar 13 12:47:08.903350 master-0 kubenswrapper[6980]: I0313 12:47:08.903243 6980 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-ztmrr container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" start-of-body= Mar 13 12:47:08.903350 master-0 kubenswrapper[6980]: I0313 12:47:08.903339 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.10:8443/healthz\": dial tcp 10.128.0.10:8443: connect: connection refused" Mar 13 12:47:08.904430 master-0 kubenswrapper[6980]: I0313 12:47:08.903416 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:47:08.904430 master-0 kubenswrapper[6980]: I0313 12:47:08.904354 6980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="authentication-operator" containerStatusID={"Type":"cri-o","ID":"1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d"} pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" containerMessage="Container authentication-operator failed liveness probe, will be 
restarted" Mar 13 12:47:08.904669 master-0 kubenswrapper[6980]: I0313 12:47:08.904457 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerName="authentication-operator" containerID="cri-o://1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d" gracePeriod=30 Mar 13 12:47:09.142119 master-0 kubenswrapper[6980]: E0313 12:47:09.142078 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ztmrr_openshift-authentication-operator(f2a74c2a-8376-4998-bdc6-02a978f1f568)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" Mar 13 12:47:09.424905 master-0 kubenswrapper[6980]: I0313 12:47:09.423769 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/4.log" Mar 13 12:47:09.424905 master-0 kubenswrapper[6980]: I0313 12:47:09.424275 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/3.log" Mar 13 12:47:09.424905 master-0 kubenswrapper[6980]: I0313 12:47:09.424333 6980 generic.go:334] "Generic (PLEG): container finished" podID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerID="1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d" exitCode=255 Mar 13 12:47:09.424905 master-0 kubenswrapper[6980]: I0313 12:47:09.424370 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerDied","Data":"1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d"} Mar 13 12:47:09.424905 master-0 kubenswrapper[6980]: I0313 12:47:09.424419 6980 scope.go:117] "RemoveContainer" containerID="becdefbc3ea900370e4c6923974c7ab3a31df7f5471eafde567062de1cbd3e5d" Mar 13 12:47:09.425312 master-0 kubenswrapper[6980]: I0313 12:47:09.425073 6980 scope.go:117] "RemoveContainer" containerID="1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d" Mar 13 12:47:09.425357 master-0 kubenswrapper[6980]: E0313 12:47:09.425329 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ztmrr_openshift-authentication-operator(f2a74c2a-8376-4998-bdc6-02a978f1f568)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" Mar 13 12:47:09.860181 master-0 kubenswrapper[6980]: I0313 12:47:09.860029 6980 scope.go:117] "RemoveContainer" containerID="2e69b748a2fdfe0cc72146b5f2da55d678257606de7db5ec9d71db1e094acc7b" Mar 13 12:47:10.431207 master-0 kubenswrapper[6980]: I0313 12:47:10.431149 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/4.log" Mar 13 12:47:10.434197 master-0 kubenswrapper[6980]: I0313 12:47:10.434154 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" event={"ID":"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0","Type":"ContainerStarted","Data":"2aeb44c5671eded6d51346ef735bebf790b3450ca05b282ff5218f460f269e58"} Mar 13 12:47:12.202308 
master-0 kubenswrapper[6980]: E0313 12:47:12.201832 6980 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 12:47:13.457147 master-0 kubenswrapper[6980]: I0313 12:47:13.457055 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"6c23cdc6601b96e6cd9e782c6c966a61626e147eafd04b183861551e09d61efd"} Mar 13 12:47:13.457147 master-0 kubenswrapper[6980]: I0313 12:47:13.457144 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"cb9739e016267022e31ecd49dd353d0fbb312c39344f7ea6d0f628422bd671c7"} Mar 13 12:47:13.457147 master-0 kubenswrapper[6980]: I0313 12:47:13.457164 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"bbb69daa8aec5294f802e0fd923d615b5bd9b54c9ad727dc130730d0148b4189"} Mar 13 12:47:13.457828 master-0 kubenswrapper[6980]: I0313 12:47:13.457175 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"957fff4e2e7c0e348e670f1e1bbd14c0b7e69017fdbbdc00acbf646f8f370e16"} Mar 13 12:47:13.457828 master-0 kubenswrapper[6980]: I0313 12:47:13.457185 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"93fcfaf9014e1420b0964c2d7727fae3c21363a8bdefab275c2ffbb8ad00d4b9"} Mar 13 12:47:13.457828 master-0 kubenswrapper[6980]: I0313 12:47:13.457611 6980 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" 
podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:47:13.457828 master-0 kubenswrapper[6980]: I0313 12:47:13.457640 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:47:13.860914 master-0 kubenswrapper[6980]: I0313 12:47:13.860812 6980 scope.go:117] "RemoveContainer" containerID="db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7" Mar 13 12:47:13.932977 master-0 kubenswrapper[6980]: E0313 12:47:13.932829 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:14.468807 master-0 kubenswrapper[6980]: I0313 12:47:14.468692 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561"} Mar 13 12:47:14.494718 master-0 kubenswrapper[6980]: E0313 12:47:14.494493 6980 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{authentication-operator-7c6989d6c4-ztmrr.189c66f4818e5338 openshift-authentication-operator 7928 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-authentication-operator,Name:authentication-operator-7c6989d6c4-ztmrr,UID:f2a74c2a-8376-4998-bdc6-02a978f1f568,APIVersion:v1,ResourceVersion:3695,FieldPath:spec.containers{authentication-operator},},Reason:Created,Message:Created container: authentication-operator,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:39:13 +0000 UTC,LastTimestamp:2026-03-13 
12:44:25.461388115 +0000 UTC m=+332.795382741,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:47:17.882362 master-0 kubenswrapper[6980]: I0313 12:47:17.882267 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 13 12:47:17.882362 master-0 kubenswrapper[6980]: I0313 12:47:17.882345 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 13 12:47:19.881256 master-0 kubenswrapper[6980]: I0313 12:47:19.881180 6980 scope.go:117] "RemoveContainer" containerID="f03432950be1db5b9603e1c8d4f0c02f9b3f872ef406dc3fb4113432dc294cf7" Mar 13 12:47:19.897895 master-0 kubenswrapper[6980]: I0313 12:47:19.897844 6980 scope.go:117] "RemoveContainer" containerID="f08c1a97d40cbbcf3932165b6ed54164f78f18d905db2ebc7a4ee45115dbb224" Mar 13 12:47:19.913259 master-0 kubenswrapper[6980]: I0313 12:47:19.913189 6980 scope.go:117] "RemoveContainer" containerID="1548830c5fd6aedb1c3d4d7d2384fdb131b3d8e72ab94a40c5ef20cdca9c52d5" Mar 13 12:47:20.137019 master-0 kubenswrapper[6980]: I0313 12:47:20.136935 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:47:21.857660 master-0 kubenswrapper[6980]: I0313 12:47:21.857536 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:47:21.859128 master-0 kubenswrapper[6980]: I0313 12:47:21.859092 6980 scope.go:117] "RemoveContainer" containerID="60ede8da4a56b532dead0c7432bda6cee615e7836acac9f70d47ed4a4d8e1991" Mar 13 12:47:21.859391 master-0 kubenswrapper[6980]: E0313 12:47:21.859345 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed 
container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-lf2dh_openshift-cluster-storage-operator(1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" podUID="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53" Mar 13 12:47:23.137838 master-0 kubenswrapper[6980]: I0313 12:47:23.137734 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:23.621340 master-0 kubenswrapper[6980]: E0313 12:47:23.621166 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:47:23.934138 master-0 kubenswrapper[6980]: E0313 12:47:23.934019 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:23.934138 master-0 kubenswrapper[6980]: E0313 12:47:23.934094 6980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 12:47:24.860936 master-0 kubenswrapper[6980]: I0313 12:47:24.860847 6980 scope.go:117] "RemoveContainer" containerID="1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d" Mar 13 12:47:24.861628 master-0 kubenswrapper[6980]: E0313 12:47:24.861129 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ztmrr_openshift-authentication-operator(f2a74c2a-8376-4998-bdc6-02a978f1f568)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" Mar 13 12:47:25.921698 master-0 kubenswrapper[6980]: I0313 12:47:25.921636 6980 status_manager.go:851] "Failed to get status for pod" podUID="b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-storage-operator-6fbfc8dc8f-hr4ws)" Mar 13 12:47:26.478898 master-0 kubenswrapper[6980]: I0313 12:47:26.478833 6980 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:47:26.479177 master-0 kubenswrapper[6980]: I0313 12:47:26.478925 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:47:26.479177 master-0 kubenswrapper[6980]: I0313 12:47:26.478978 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" Mar 13 12:47:26.479912 master-0 kubenswrapper[6980]: I0313 12:47:26.479867 6980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"eab5e29eedcb24ff8a4205f7bf62bee3cde077c035b42cc119aefb133323f99c"} pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:47:26.480008 master-0 kubenswrapper[6980]: I0313 12:47:26.479962 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" containerID="cri-o://eab5e29eedcb24ff8a4205f7bf62bee3cde077c035b42cc119aefb133323f99c" gracePeriod=600 Mar 13 12:47:27.578023 master-0 kubenswrapper[6980]: I0313 12:47:27.577848 6980 generic.go:334] "Generic (PLEG): container finished" podID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerID="eab5e29eedcb24ff8a4205f7bf62bee3cde077c035b42cc119aefb133323f99c" exitCode=0 Mar 13 12:47:27.578023 master-0 kubenswrapper[6980]: I0313 12:47:27.577925 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerDied","Data":"eab5e29eedcb24ff8a4205f7bf62bee3cde077c035b42cc119aefb133323f99c"} Mar 13 12:47:27.578023 master-0 kubenswrapper[6980]: I0313 12:47:27.577969 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerStarted","Data":"059ba8cdf96cbfaa0c84868f9e73236a2a31a080a6c5d262ecec57fd9b950d4b"} Mar 13 12:47:27.909615 master-0 kubenswrapper[6980]: I0313 12:47:27.909539 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 13 12:47:29.592555 master-0 kubenswrapper[6980]: I0313 12:47:29.591523 6980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-hr4ws_b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/cluster-storage-operator/1.log" Mar 13 12:47:29.592555 master-0 kubenswrapper[6980]: I0313 12:47:29.592173 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-hr4ws_b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/cluster-storage-operator/0.log" Mar 13 12:47:29.592555 master-0 kubenswrapper[6980]: I0313 12:47:29.592225 6980 generic.go:334] "Generic (PLEG): container finished" podID="b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2" containerID="6fa945ba8a78d2026eeaa5c65617e884ae33b65d477ef3f125c934aff5ce456b" exitCode=255 Mar 13 12:47:29.592555 master-0 kubenswrapper[6980]: I0313 12:47:29.592269 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" event={"ID":"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2","Type":"ContainerDied","Data":"6fa945ba8a78d2026eeaa5c65617e884ae33b65d477ef3f125c934aff5ce456b"} Mar 13 12:47:29.592555 master-0 kubenswrapper[6980]: I0313 12:47:29.592336 6980 scope.go:117] "RemoveContainer" containerID="9d44e8bb2becf09a5e1714163e260e9eea95f1c259f54fde8d48b3b6f2a4d308" Mar 13 12:47:29.593226 master-0 kubenswrapper[6980]: I0313 12:47:29.592942 6980 scope.go:117] "RemoveContainer" containerID="6fa945ba8a78d2026eeaa5c65617e884ae33b65d477ef3f125c934aff5ce456b" Mar 13 12:47:29.593297 master-0 kubenswrapper[6980]: E0313 12:47:29.593235 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-storage-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-storage-operator pod=cluster-storage-operator-6fbfc8dc8f-hr4ws_openshift-cluster-storage-operator(b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2)\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" 
podUID="b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2" Mar 13 12:47:29.742036 master-0 kubenswrapper[6980]: E0313 12:47:29.741976 6980 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 12:47:29.742036 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff" Netns:"/var/run/netns/7a34da18-327c-4021-a0d2-1f42d69a63aa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 12:47:29.742036 master-0 kubenswrapper[6980]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:47:29.742036 master-0 kubenswrapper[6980]: > Mar 13 12:47:29.742365 master-0 kubenswrapper[6980]: E0313 12:47:29.742091 6980 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 12:47:29.742365 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff" Netns:"/var/run/netns/7a34da18-327c-4021-a0d2-1f42d69a63aa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 12:47:29.742365 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:47:29.742365 master-0 kubenswrapper[6980]: > pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:47:29.742365 master-0 kubenswrapper[6980]: E0313 12:47:29.742120 6980 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 12:47:29.742365 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff" Netns:"/var/run/netns/7a34da18-327c-4021-a0d2-1f42d69a63aa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: 
[openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 12:47:29.742365 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:47:29.742365 master-0 kubenswrapper[6980]: > pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:47:29.742365 master-0 kubenswrapper[6980]: E0313 12:47:29.742228 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-1-retry-1-master-0_openshift-kube-apiserver(bc244427-5e4e-441c-a04d-f93aeca9b7c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-1-retry-1-master-0_openshift-kube-apiserver(bc244427-5e4e-441c-a04d-f93aeca9b7c1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): 
CNI request failed with status 400: 'ContainerID:\\\"e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff\\\" Netns:\\\"/var/run/netns/7a34da18-327c-4021-a0d2-1f42d69a63aa\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=e2b32fb8dc0891e9bb09724d985801b4a4f7c4ad697794ecf0a02189e448ecff;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="bc244427-5e4e-441c-a04d-f93aeca9b7c1" Mar 13 12:47:30.600674 master-0 kubenswrapper[6980]: I0313 12:47:30.600580 6980 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-hr4ws_b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/cluster-storage-operator/1.log" Mar 13 12:47:30.601327 master-0 kubenswrapper[6980]: I0313 12:47:30.600770 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:47:30.601327 master-0 kubenswrapper[6980]: I0313 12:47:30.601132 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:47:32.895404 master-0 kubenswrapper[6980]: I0313 12:47:32.895339 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 13 12:47:33.137856 master-0 kubenswrapper[6980]: I0313 12:47:33.137772 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:36.859819 master-0 kubenswrapper[6980]: I0313 12:47:36.859737 6980 scope.go:117] "RemoveContainer" containerID="60ede8da4a56b532dead0c7432bda6cee615e7836acac9f70d47ed4a4d8e1991" Mar 13 12:47:36.860702 master-0 kubenswrapper[6980]: E0313 12:47:36.860036 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-lf2dh_openshift-cluster-storage-operator(1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" podUID="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53" Mar 13 12:47:39.860412 master-0 kubenswrapper[6980]: 
I0313 12:47:39.860337 6980 scope.go:117] "RemoveContainer" containerID="1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d" Mar 13 12:47:39.861127 master-0 kubenswrapper[6980]: E0313 12:47:39.860534 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ztmrr_openshift-authentication-operator(f2a74c2a-8376-4998-bdc6-02a978f1f568)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" Mar 13 12:47:40.623845 master-0 kubenswrapper[6980]: E0313 12:47:40.623774 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:47:43.137839 master-0 kubenswrapper[6980]: I0313 12:47:43.137736 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:43.138686 master-0 kubenswrapper[6980]: I0313 12:47:43.137874 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:47:43.138686 master-0 kubenswrapper[6980]: I0313 12:47:43.138626 6980 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561"} 
pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 12:47:43.138839 master-0 kubenswrapper[6980]: I0313 12:47:43.138703 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" gracePeriod=30 Mar 13 12:47:43.256737 master-0 kubenswrapper[6980]: E0313 12:47:43.256683 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:47:43.676213 master-0 kubenswrapper[6980]: I0313 12:47:43.676133 6980 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" exitCode=2 Mar 13 12:47:43.676213 master-0 kubenswrapper[6980]: I0313 12:47:43.676194 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561"} Mar 13 12:47:43.676533 master-0 kubenswrapper[6980]: I0313 12:47:43.676242 6980 scope.go:117] "RemoveContainer" containerID="db89779c57e5384ab9cc88749090d6c70379e9b3a7dfbb9e8c64f63e40fa55d7" Mar 13 12:47:43.676912 master-0 kubenswrapper[6980]: I0313 12:47:43.676854 6980 scope.go:117] "RemoveContainer" 
containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:47:43.677337 master-0 kubenswrapper[6980]: E0313 12:47:43.677253 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:47:44.096173 master-0 kubenswrapper[6980]: E0313 12:47:44.095962 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:47:34Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:47:34Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:47:34Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:47:34Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4855408bd0e4d0711383d0c14dcad53c98255ff9f83f6cbef
b57e47eacc1f1f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:97bdbb5854e4ad7976209a44cff02c8a2b9542f58ad007c06a5c3a5e8266def1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284762325},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:27f5385c5b700fb400a618b51a628f0db39afa4a8db03380252ca5abf49518da\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:3d8cd257adb4bde31657aa6b0fe5da54d74b1f9eda5457c8dee929ed64ecece0\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221692102},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\
\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508
888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f
7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d\\\"],\\\"sizeBytes\\\":470822665},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:47:44.410090 master-0 kubenswrapper[6980]: I0313 12:47:44.409991 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:47:44.682713 master-0 kubenswrapper[6980]: I0313 12:47:44.682557 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/2.log" Mar 13 12:47:44.683064 master-0 kubenswrapper[6980]: I0313 12:47:44.683036 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/1.log" Mar 13 12:47:44.683349 master-0 kubenswrapper[6980]: I0313 12:47:44.683312 6980 generic.go:334] "Generic (PLEG): container finished" podID="e0763043-3813-43b6-9618-b2d15c942edb" containerID="5ef1c2475f3f7f2424a5113e9ba281fd9a18016393e29411ab9ffb53cc7cc2df" 
exitCode=1 Mar 13 12:47:44.683405 master-0 kubenswrapper[6980]: I0313 12:47:44.683363 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" event={"ID":"e0763043-3813-43b6-9618-b2d15c942edb","Type":"ContainerDied","Data":"5ef1c2475f3f7f2424a5113e9ba281fd9a18016393e29411ab9ffb53cc7cc2df"} Mar 13 12:47:44.683405 master-0 kubenswrapper[6980]: I0313 12:47:44.683397 6980 scope.go:117] "RemoveContainer" containerID="a91387c0d2edba9c3d2e8ad94948ab7bd5619f72dc7e614cae4066aa84f51139" Mar 13 12:47:44.683914 master-0 kubenswrapper[6980]: I0313 12:47:44.683887 6980 scope.go:117] "RemoveContainer" containerID="5ef1c2475f3f7f2424a5113e9ba281fd9a18016393e29411ab9ffb53cc7cc2df" Mar 13 12:47:44.684191 master-0 kubenswrapper[6980]: E0313 12:47:44.684153 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-hp84r_openshift-machine-api(e0763043-3813-43b6-9618-b2d15c942edb)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" podUID="e0763043-3813-43b6-9618-b2d15c942edb" Mar 13 12:47:44.689872 master-0 kubenswrapper[6980]: I0313 12:47:44.689832 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:47:44.690085 master-0 kubenswrapper[6980]: E0313 12:47:44.690051 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:47:44.860417 master-0 kubenswrapper[6980]: 
I0313 12:47:44.860334 6980 scope.go:117] "RemoveContainer" containerID="6fa945ba8a78d2026eeaa5c65617e884ae33b65d477ef3f125c934aff5ce456b" Mar 13 12:47:45.697818 master-0 kubenswrapper[6980]: I0313 12:47:45.697725 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-hr4ws_b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/cluster-storage-operator/1.log" Mar 13 12:47:45.698636 master-0 kubenswrapper[6980]: I0313 12:47:45.697843 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" event={"ID":"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2","Type":"ContainerStarted","Data":"9dd8abe64db52c5877e980d3c2ef22411b5fee4ae92c2368133db3cb8a2b73b8"} Mar 13 12:47:45.699948 master-0 kubenswrapper[6980]: I0313 12:47:45.699905 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/2.log" Mar 13 12:47:47.460772 master-0 kubenswrapper[6980]: E0313 12:47:47.460703 6980 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 12:47:47.711541 master-0 kubenswrapper[6980]: I0313 12:47:47.711391 6980 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:47:47.711541 master-0 kubenswrapper[6980]: I0313 12:47:47.711427 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:47:50.859610 master-0 kubenswrapper[6980]: I0313 12:47:50.859531 6980 scope.go:117] "RemoveContainer" containerID="1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d" Mar 13 12:47:50.860229 master-0 kubenswrapper[6980]: E0313 12:47:50.859834 
6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ztmrr_openshift-authentication-operator(f2a74c2a-8376-4998-bdc6-02a978f1f568)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" Mar 13 12:47:50.860504 master-0 kubenswrapper[6980]: I0313 12:47:50.860452 6980 scope.go:117] "RemoveContainer" containerID="60ede8da4a56b532dead0c7432bda6cee615e7836acac9f70d47ed4a4d8e1991" Mar 13 12:47:51.740532 master-0 kubenswrapper[6980]: I0313 12:47:51.740464 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/3.log" Mar 13 12:47:51.740532 master-0 kubenswrapper[6980]: I0313 12:47:51.740523 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" event={"ID":"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53","Type":"ContainerStarted","Data":"732fc22a14bbb4c9e3471d5b6a3f3b5fef4ebe3db808957d59404a9b9e3ec4c7"} Mar 13 12:47:54.096658 master-0 kubenswrapper[6980]: E0313 12:47:54.096419 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 13 12:47:56.860346 master-0 kubenswrapper[6980]: I0313 12:47:56.860271 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:47:56.861435 master-0 kubenswrapper[6980]: E0313 12:47:56.860680 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: 
\"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:47:57.625999 master-0 kubenswrapper[6980]: E0313 12:47:57.625870 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:47:57.860050 master-0 kubenswrapper[6980]: I0313 12:47:57.859981 6980 scope.go:117] "RemoveContainer" containerID="5ef1c2475f3f7f2424a5113e9ba281fd9a18016393e29411ab9ffb53cc7cc2df" Mar 13 12:47:57.860307 master-0 kubenswrapper[6980]: E0313 12:47:57.860247 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-hp84r_openshift-machine-api(e0763043-3813-43b6-9618-b2d15c942edb)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" podUID="e0763043-3813-43b6-9618-b2d15c942edb" Mar 13 12:48:04.096952 master-0 kubenswrapper[6980]: E0313 12:48:04.096851 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:48:04.860065 master-0 kubenswrapper[6980]: I0313 12:48:04.860004 6980 scope.go:117] "RemoveContainer" containerID="1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d" Mar 13 12:48:04.860330 master-0 kubenswrapper[6980]: E0313 12:48:04.860248 6980 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ztmrr_openshift-authentication-operator(f2a74c2a-8376-4998-bdc6-02a978f1f568)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" Mar 13 12:48:06.824044 master-0 kubenswrapper[6980]: I0313 12:48:06.823976 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/2.log" Mar 13 12:48:06.824811 master-0 kubenswrapper[6980]: I0313 12:48:06.824700 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/1.log" Mar 13 12:48:06.825201 master-0 kubenswrapper[6980]: I0313 12:48:06.825132 6980 generic.go:334] "Generic (PLEG): container finished" podID="c1213b50-28bf-43ff-94c4-20616907735b" containerID="59e05d7ef9c275462e23676df5f29c2f046e91105d4c6257aa27b85c4193fd57" exitCode=1 Mar 13 12:48:06.825201 master-0 kubenswrapper[6980]: I0313 12:48:06.825198 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" event={"ID":"c1213b50-28bf-43ff-94c4-20616907735b","Type":"ContainerDied","Data":"59e05d7ef9c275462e23676df5f29c2f046e91105d4c6257aa27b85c4193fd57"} Mar 13 12:48:06.825348 master-0 kubenswrapper[6980]: I0313 12:48:06.825241 6980 scope.go:117] "RemoveContainer" containerID="8d4eec45db8103811e7a9ea0a4ee194d4eaf95e2b884bee0f9c64da3657f0e11" Mar 13 12:48:06.826948 master-0 kubenswrapper[6980]: I0313 12:48:06.826401 6980 scope.go:117] "RemoveContainer" containerID="59e05d7ef9c275462e23676df5f29c2f046e91105d4c6257aa27b85c4193fd57" Mar 13 12:48:06.826948 master-0 kubenswrapper[6980]: 
E0313 12:48:06.826679 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-9nxcz_openshift-ingress-operator(c1213b50-28bf-43ff-94c4-20616907735b)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" podUID="c1213b50-28bf-43ff-94c4-20616907735b" Mar 13 12:48:07.835746 master-0 kubenswrapper[6980]: I0313 12:48:07.835514 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/2.log" Mar 13 12:48:09.860189 master-0 kubenswrapper[6980]: I0313 12:48:09.860090 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:48:09.861171 master-0 kubenswrapper[6980]: E0313 12:48:09.860388 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:11.860489 master-0 kubenswrapper[6980]: I0313 12:48:11.860422 6980 scope.go:117] "RemoveContainer" containerID="5ef1c2475f3f7f2424a5113e9ba281fd9a18016393e29411ab9ffb53cc7cc2df" Mar 13 12:48:12.870060 master-0 kubenswrapper[6980]: I0313 12:48:12.870025 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/2.log" Mar 13 12:48:12.871273 master-0 kubenswrapper[6980]: I0313 12:48:12.871207 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" event={"ID":"e0763043-3813-43b6-9618-b2d15c942edb","Type":"ContainerStarted","Data":"06790cd4ff0dc1771ffdbb232edd8dee65f805d9cad62778438e81773ee7f47a"} Mar 13 12:48:14.097925 master-0 kubenswrapper[6980]: E0313 12:48:14.097824 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:48:14.627399 master-0 kubenswrapper[6980]: E0313 12:48:14.627309 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:48:17.859412 master-0 kubenswrapper[6980]: I0313 12:48:17.859339 6980 scope.go:117] "RemoveContainer" containerID="59e05d7ef9c275462e23676df5f29c2f046e91105d4c6257aa27b85c4193fd57" Mar 13 12:48:17.860349 master-0 kubenswrapper[6980]: E0313 12:48:17.859646 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ingress-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ingress-operator pod=ingress-operator-677db989d6-9nxcz_openshift-ingress-operator(c1213b50-28bf-43ff-94c4-20616907735b)\"" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" podUID="c1213b50-28bf-43ff-94c4-20616907735b" Mar 13 12:48:19.859417 master-0 kubenswrapper[6980]: I0313 12:48:19.859284 6980 scope.go:117] "RemoveContainer" containerID="1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d" Mar 13 12:48:19.860818 master-0 kubenswrapper[6980]: E0313 12:48:19.859564 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"authentication-operator\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=authentication-operator pod=authentication-operator-7c6989d6c4-ztmrr_openshift-authentication-operator(f2a74c2a-8376-4998-bdc6-02a978f1f568)\"" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" podUID="f2a74c2a-8376-4998-bdc6-02a978f1f568" Mar 13 12:48:20.860069 master-0 kubenswrapper[6980]: I0313 12:48:20.860012 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:48:20.861550 master-0 kubenswrapper[6980]: E0313 12:48:20.861127 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:21.714609 master-0 kubenswrapper[6980]: E0313 12:48:21.714506 6980 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 12:48:24.099493 master-0 kubenswrapper[6980]: E0313 12:48:24.099291 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:48:24.099493 master-0 kubenswrapper[6980]: E0313 12:48:24.099377 6980 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 12:48:25.923257 master-0 kubenswrapper[6980]: I0313 12:48:25.923177 6980 status_manager.go:851] "Failed to get status for pod" 
podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-cloud-controller-manager-operator-559568b945-jdwm7)" Mar 13 12:48:29.078499 master-0 kubenswrapper[6980]: I0313 12:48:29.078388 6980 generic.go:334] "Generic (PLEG): container finished" podID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" containerID="94037d184139c388b62f88d584af05330086578d35ea58336f426f811ec331bf" exitCode=0 Mar 13 12:48:29.079198 master-0 kubenswrapper[6980]: I0313 12:48:29.078490 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerDied","Data":"94037d184139c388b62f88d584af05330086578d35ea58336f426f811ec331bf"} Mar 13 12:48:29.079198 master-0 kubenswrapper[6980]: I0313 12:48:29.078626 6980 scope.go:117] "RemoveContainer" containerID="325a312bdd5655125848695bf9ff7bb2b0934ae3b7bbc8f5febd7f2f02b8ee68" Mar 13 12:48:29.079843 master-0 kubenswrapper[6980]: I0313 12:48:29.079802 6980 scope.go:117] "RemoveContainer" containerID="94037d184139c388b62f88d584af05330086578d35ea58336f426f811ec331bf" Mar 13 12:48:29.081899 master-0 kubenswrapper[6980]: E0313 12:48:29.081831 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=insights-operator pod=insights-operator-8f89dfddd-s4gd8_openshift-insights(0ecab24a-cb8c-4171-9a04-c34d1d6d71c1)\"" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" podUID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" Mar 13 12:48:30.860387 master-0 kubenswrapper[6980]: I0313 12:48:30.860313 6980 scope.go:117] "RemoveContainer" containerID="59e05d7ef9c275462e23676df5f29c2f046e91105d4c6257aa27b85c4193fd57" Mar 13 
12:48:31.095470 master-0 kubenswrapper[6980]: I0313 12:48:31.095410 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/2.log" Mar 13 12:48:31.095827 master-0 kubenswrapper[6980]: I0313 12:48:31.095786 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" event={"ID":"c1213b50-28bf-43ff-94c4-20616907735b","Type":"ContainerStarted","Data":"7ce72586c58e23b561d401a78a6fadbbf8f75f65a1642d4f420bea4838ced284"} Mar 13 12:48:31.243726 master-0 kubenswrapper[6980]: E0313 12:48:31.243651 6980 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 12:48:31.243726 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc" Netns:"/var/run/netns/181c7b97-1fad-4037-9486-462e15c9ef2c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of 
cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 12:48:31.243726 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:48:31.243726 master-0 kubenswrapper[6980]: > Mar 13 12:48:31.243943 master-0 kubenswrapper[6980]: E0313 12:48:31.243768 6980 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 12:48:31.243943 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc" Netns:"/var/run/netns/181c7b97-1fad-4037-9486-462e15c9ef2c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod 
[openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 12:48:31.243943 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:48:31.243943 master-0 kubenswrapper[6980]: > pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:48:31.243943 master-0 kubenswrapper[6980]: E0313 12:48:31.243807 6980 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 12:48:31.243943 master-0 kubenswrapper[6980]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc" 
Netns:"/var/run/netns/181c7b97-1fad-4037-9486-462e15c9ef2c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 12:48:31.243943 master-0 kubenswrapper[6980]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:48:31.243943 master-0 kubenswrapper[6980]: > pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:48:31.244190 master-0 kubenswrapper[6980]: E0313 12:48:31.243879 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-1-retry-1-master-0_openshift-kube-apiserver(bc244427-5e4e-441c-a04d-f93aeca9b7c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-1-retry-1-master-0_openshift-kube-apiserver(bc244427-5e4e-441c-a04d-f93aeca9b7c1)\\\": rpc 
error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-retry-1-master-0_openshift-kube-apiserver_bc244427-5e4e-441c-a04d-f93aeca9b7c1_0(1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc): error adding pod openshift-kube-apiserver_installer-1-retry-1-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc\\\" Netns:\\\"/var/run/netns/181c7b97-1fad-4037-9486-462e15c9ef2c\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-retry-1-master-0;K8S_POD_INFRA_CONTAINER_ID=1d49a906bc96029d6f03aa4403bb0991ec4bb9e71427a0f84e328a6b163b07fc;K8S_POD_UID=bc244427-5e4e-441c-a04d-f93aeca9b7c1\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-retry-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-retry-1-master-0/bc244427-5e4e-441c-a04d-f93aeca9b7c1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-retry-1-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-retry-1-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podUID="bc244427-5e4e-441c-a04d-f93aeca9b7c1" Mar 13 12:48:31.628882 master-0 kubenswrapper[6980]: E0313 12:48:31.628677 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:48:31.860261 master-0 kubenswrapper[6980]: I0313 12:48:31.860212 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:48:31.860942 master-0 kubenswrapper[6980]: E0313 12:48:31.860904 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:32.101447 master-0 kubenswrapper[6980]: I0313 12:48:32.101374 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:48:32.102003 master-0 kubenswrapper[6980]: I0313 12:48:32.101972 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:48:34.860747 master-0 kubenswrapper[6980]: I0313 12:48:34.860640 6980 scope.go:117] "RemoveContainer" containerID="1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d" Mar 13 12:48:35.127312 master-0 kubenswrapper[6980]: I0313 12:48:35.127101 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/4.log" Mar 13 12:48:35.127312 master-0 kubenswrapper[6980]: I0313 12:48:35.127243 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" event={"ID":"f2a74c2a-8376-4998-bdc6-02a978f1f568","Type":"ContainerStarted","Data":"3c844592236a7ed36c368de7f8152fe55c0bf83959be2ba35da9ac43144c285f"} Mar 13 12:48:42.859738 master-0 kubenswrapper[6980]: I0313 12:48:42.859633 6980 scope.go:117] "RemoveContainer" containerID="94037d184139c388b62f88d584af05330086578d35ea58336f426f811ec331bf" Mar 13 12:48:43.177889 master-0 kubenswrapper[6980]: I0313 12:48:43.177838 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerStarted","Data":"ac14ccdadcb85cedb903872ea2dbb40876363197cf4a91aa1d4403565a354eb1"} Mar 13 12:48:44.336691 master-0 kubenswrapper[6980]: E0313 12:48:44.336477 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:48:34Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:48:34Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:48:34Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:48:34Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:4855408bd0e4d0711383d0c14dcad53c98255ff9f83f6cbefb57e47eacc1f1f1\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:97bdbb5854e4ad7976209a44cff02c8a2b9542f58ad007c06a5c3a5e8266def1\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284762325},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketpl
ace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:27f5385c5b700fb400a618b51a628f0db39afa4a8db03380252ca5abf49518da\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:3d8cd257adb4bde31657aa6b0fe5da54d74b1f9eda5457c8dee929ed64ecece0\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221692102},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc1
4d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-releas
e-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d74fe7cb12c554c120262683d9c4066f33ae4f60a5fad83cba419d851b98c12d\\\"],\\\"sizeBytes\\\":470822665},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e9ee63a30a9b95b5801afa36e09fc583ec2cda3c5cb3c8676e478fea016abfa1\\\"],\\\"sizeBytes\\\":470680779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\
"],\\\"sizeBytes\\\":468263999}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:48:44.861437 master-0 kubenswrapper[6980]: I0313 12:48:44.861331 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:48:44.861862 master-0 kubenswrapper[6980]: E0313 12:48:44.861535 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:48.630911 master-0 kubenswrapper[6980]: E0313 12:48:48.630713 6980 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 12:48:49.221979 master-0 kubenswrapper[6980]: I0313 12:48:49.221928 6980 generic.go:334] "Generic (PLEG): container finished" podID="a6a45be0-19ef-4d36-b8a7-eb2705d24bfa" containerID="4b9e882a01cdfbc8bf7760e0d86d536a94312b94c74000951cc0b9a06f2c288b" exitCode=0 Mar 13 12:48:49.222234 master-0 kubenswrapper[6980]: I0313 12:48:49.221997 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" event={"ID":"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa","Type":"ContainerDied","Data":"4b9e882a01cdfbc8bf7760e0d86d536a94312b94c74000951cc0b9a06f2c288b"} Mar 13 12:48:49.222532 master-0 kubenswrapper[6980]: I0313 12:48:49.222501 6980 
scope.go:117] "RemoveContainer" containerID="4b9e882a01cdfbc8bf7760e0d86d536a94312b94c74000951cc0b9a06f2c288b" Mar 13 12:48:49.229884 master-0 kubenswrapper[6980]: I0313 12:48:49.229829 6980 generic.go:334] "Generic (PLEG): container finished" podID="73dc5747-2d30-4a2d-a784-1dea1e10811d" containerID="f1548edda6fc1651ae68b99d0898df5822866731cd8d5864b19d50d8643d5b08" exitCode=0 Mar 13 12:48:49.229972 master-0 kubenswrapper[6980]: I0313 12:48:49.229922 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" event={"ID":"73dc5747-2d30-4a2d-a784-1dea1e10811d","Type":"ContainerDied","Data":"f1548edda6fc1651ae68b99d0898df5822866731cd8d5864b19d50d8643d5b08"} Mar 13 12:48:49.230047 master-0 kubenswrapper[6980]: I0313 12:48:49.229973 6980 scope.go:117] "RemoveContainer" containerID="d691dfff8d938f7ef898022014143d56dbbe1b4283d8d74c7b7938096f18aafe" Mar 13 12:48:49.230602 master-0 kubenswrapper[6980]: I0313 12:48:49.230552 6980 scope.go:117] "RemoveContainer" containerID="f1548edda6fc1651ae68b99d0898df5822866731cd8d5864b19d50d8643d5b08" Mar 13 12:48:49.230838 master-0 kubenswrapper[6980]: E0313 12:48:49.230792 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-apiserver-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-apiserver-operator pod=openshift-apiserver-operator-799b6db4d7-74fhg_openshift-apiserver-operator(73dc5747-2d30-4a2d-a784-1dea1e10811d)\"" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" podUID="73dc5747-2d30-4a2d-a784-1dea1e10811d" Mar 13 12:48:49.236519 master-0 kubenswrapper[6980]: I0313 12:48:49.236424 6980 generic.go:334] "Generic (PLEG): container finished" podID="1e9803a4-a166-42dc-9498-57e213602684" containerID="0b8ffb9009d34dca0914bb1efe6a7d4b6106f10f28097f2ee3fe0b233ae17b98" exitCode=0 Mar 13 12:48:49.236519 master-0 kubenswrapper[6980]: I0313 
12:48:49.236498 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" event={"ID":"1e9803a4-a166-42dc-9498-57e213602684","Type":"ContainerDied","Data":"0b8ffb9009d34dca0914bb1efe6a7d4b6106f10f28097f2ee3fe0b233ae17b98"} Mar 13 12:48:49.237064 master-0 kubenswrapper[6980]: I0313 12:48:49.237020 6980 scope.go:117] "RemoveContainer" containerID="0b8ffb9009d34dca0914bb1efe6a7d4b6106f10f28097f2ee3fe0b233ae17b98" Mar 13 12:48:49.239016 master-0 kubenswrapper[6980]: I0313 12:48:49.238960 6980 generic.go:334] "Generic (PLEG): container finished" podID="16c2d774-967f-4964-ab4e-eb13c4364f63" containerID="03adaefddde685072ec465ec3fa62e611b8564796fc923070952faebdeec68f6" exitCode=0 Mar 13 12:48:49.239016 master-0 kubenswrapper[6980]: I0313 12:48:49.239011 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" event={"ID":"16c2d774-967f-4964-ab4e-eb13c4364f63","Type":"ContainerDied","Data":"03adaefddde685072ec465ec3fa62e611b8564796fc923070952faebdeec68f6"} Mar 13 12:48:49.239303 master-0 kubenswrapper[6980]: I0313 12:48:49.239269 6980 scope.go:117] "RemoveContainer" containerID="03adaefddde685072ec465ec3fa62e611b8564796fc923070952faebdeec68f6" Mar 13 12:48:49.241868 master-0 kubenswrapper[6980]: I0313 12:48:49.241826 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-w8b7h_20217cff-2f81-4a56-9c15-28385c19258c/package-server-manager/0.log" Mar 13 12:48:49.244721 master-0 kubenswrapper[6980]: I0313 12:48:49.244266 6980 generic.go:334] "Generic (PLEG): container finished" podID="20217cff-2f81-4a56-9c15-28385c19258c" containerID="f380cb6aa96691042a8cede3619ef1bcaa412985b21e3cadd6963fc297c7968d" exitCode=1 Mar 13 12:48:49.244721 master-0 kubenswrapper[6980]: I0313 12:48:49.244344 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" event={"ID":"20217cff-2f81-4a56-9c15-28385c19258c","Type":"ContainerDied","Data":"f380cb6aa96691042a8cede3619ef1bcaa412985b21e3cadd6963fc297c7968d"} Mar 13 12:48:49.244969 master-0 kubenswrapper[6980]: I0313 12:48:49.244802 6980 scope.go:117] "RemoveContainer" containerID="f380cb6aa96691042a8cede3619ef1bcaa412985b21e3cadd6963fc297c7968d" Mar 13 12:48:49.250113 master-0 kubenswrapper[6980]: I0313 12:48:49.250090 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-hsrbc_684c9067-189a-4f50-ac8d-97111aa73d9c/kube-apiserver-operator/1.log" Mar 13 12:48:49.250247 master-0 kubenswrapper[6980]: I0313 12:48:49.250155 6980 generic.go:334] "Generic (PLEG): container finished" podID="684c9067-189a-4f50-ac8d-97111aa73d9c" containerID="710eb299157e1ef547583f7fd20b397c92fa5af65696f69dc8c6e3ebffa2ae8b" exitCode=0 Mar 13 12:48:49.250335 master-0 kubenswrapper[6980]: I0313 12:48:49.250254 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" event={"ID":"684c9067-189a-4f50-ac8d-97111aa73d9c","Type":"ContainerDied","Data":"710eb299157e1ef547583f7fd20b397c92fa5af65696f69dc8c6e3ebffa2ae8b"} Mar 13 12:48:49.250939 master-0 kubenswrapper[6980]: I0313 12:48:49.250803 6980 scope.go:117] "RemoveContainer" containerID="710eb299157e1ef547583f7fd20b397c92fa5af65696f69dc8c6e3ebffa2ae8b" Mar 13 12:48:49.251048 master-0 kubenswrapper[6980]: E0313 12:48:49.251024 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-68bd585b-hsrbc_openshift-kube-apiserver-operator(684c9067-189a-4f50-ac8d-97111aa73d9c)\"" 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" podUID="684c9067-189a-4f50-ac8d-97111aa73d9c" Mar 13 12:48:49.262395 master-0 kubenswrapper[6980]: I0313 12:48:49.261553 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-mwnxf_5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/cluster-node-tuning-operator/0.log" Mar 13 12:48:49.262395 master-0 kubenswrapper[6980]: I0313 12:48:49.261612 6980 generic.go:334] "Generic (PLEG): container finished" podID="5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346" containerID="bac301547b48cdecb8c65de938d2eda1a0511b2e5a444761ea88edbc804c54a7" exitCode=1 Mar 13 12:48:49.262395 master-0 kubenswrapper[6980]: I0313 12:48:49.261677 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" event={"ID":"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346","Type":"ContainerDied","Data":"bac301547b48cdecb8c65de938d2eda1a0511b2e5a444761ea88edbc804c54a7"} Mar 13 12:48:49.262395 master-0 kubenswrapper[6980]: I0313 12:48:49.262078 6980 scope.go:117] "RemoveContainer" containerID="bac301547b48cdecb8c65de938d2eda1a0511b2e5a444761ea88edbc804c54a7" Mar 13 12:48:49.267760 master-0 kubenswrapper[6980]: I0313 12:48:49.266914 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-b52x8_3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/openshift-controller-manager-operator/0.log" Mar 13 12:48:49.267760 master-0 kubenswrapper[6980]: I0313 12:48:49.266967 6980 generic.go:334] "Generic (PLEG): container finished" podID="3f66dbf5-722f-4aed-becb-fb1b62ea7fe6" containerID="9611f10b22041823517def90fc354bf396ed36c2da787d15f2b67268e42a0e1b" exitCode=0 Mar 13 12:48:49.267760 master-0 kubenswrapper[6980]: I0313 12:48:49.267021 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" event={"ID":"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6","Type":"ContainerDied","Data":"9611f10b22041823517def90fc354bf396ed36c2da787d15f2b67268e42a0e1b"} Mar 13 12:48:49.267760 master-0 kubenswrapper[6980]: I0313 12:48:49.267469 6980 scope.go:117] "RemoveContainer" containerID="9611f10b22041823517def90fc354bf396ed36c2da787d15f2b67268e42a0e1b" Mar 13 12:48:49.267760 master-0 kubenswrapper[6980]: E0313 12:48:49.267717 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-controller-manager-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=openshift-controller-manager-operator pod=openshift-controller-manager-operator-8565d84698-b52x8_openshift-controller-manager-operator(3f66dbf5-722f-4aed-becb-fb1b62ea7fe6)\"" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" podUID="3f66dbf5-722f-4aed-becb-fb1b62ea7fe6" Mar 13 12:48:49.278343 master-0 kubenswrapper[6980]: I0313 12:48:49.275980 6980 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="5ae7ae35f7136762cbb13e8c36aee38aecdcf9e047584314d44cc6cd1301533e" exitCode=0 Mar 13 12:48:49.278343 master-0 kubenswrapper[6980]: I0313 12:48:49.276055 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"5ae7ae35f7136762cbb13e8c36aee38aecdcf9e047584314d44cc6cd1301533e"} Mar 13 12:48:49.278343 master-0 kubenswrapper[6980]: I0313 12:48:49.276801 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:48:49.278343 master-0 kubenswrapper[6980]: I0313 12:48:49.276839 6980 scope.go:117] "RemoveContainer" 
containerID="5ae7ae35f7136762cbb13e8c36aee38aecdcf9e047584314d44cc6cd1301533e" Mar 13 12:48:49.554542 master-0 kubenswrapper[6980]: I0313 12:48:49.554484 6980 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:48:49.672594 master-0 kubenswrapper[6980]: I0313 12:48:49.672504 6980 scope.go:117] "RemoveContainer" containerID="cc996817afafd2df7fd421372b8e47516fdf24cdaea627bf1268ff842055a746" Mar 13 12:48:49.703853 master-0 kubenswrapper[6980]: I0313 12:48:49.703805 6980 scope.go:117] "RemoveContainer" containerID="d2c23685e01b04fc93d262aa5b6ebee8c573cd64c0296928ae13eaf96f993a18" Mar 13 12:48:49.864074 master-0 kubenswrapper[6980]: E0313 12:48:49.863982 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:50.301387 master-0 kubenswrapper[6980]: I0313 12:48:50.301301 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-w8b7h_20217cff-2f81-4a56-9c15-28385c19258c/package-server-manager/0.log" Mar 13 12:48:50.301795 master-0 kubenswrapper[6980]: I0313 12:48:50.301740 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" event={"ID":"20217cff-2f81-4a56-9c15-28385c19258c","Type":"ContainerStarted","Data":"b33d0dac69dedd6f948dd83d25c1da562fde06656a9b967f46cbca32564b94c7"} Mar 13 12:48:50.302203 master-0 kubenswrapper[6980]: I0313 12:48:50.302176 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:48:50.303664 master-0 kubenswrapper[6980]: I0313 12:48:50.303639 6980 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-mwnxf_5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/cluster-node-tuning-operator/0.log" Mar 13 12:48:50.303845 master-0 kubenswrapper[6980]: I0313 12:48:50.303815 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" event={"ID":"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346","Type":"ContainerStarted","Data":"1026863fe0cfa4b30cb427289517a8a746f8dd6d5a87355a375c538cc06ae73d"} Mar 13 12:48:50.316726 master-0 kubenswrapper[6980]: I0313 12:48:50.315616 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" event={"ID":"1e9803a4-a166-42dc-9498-57e213602684","Type":"ContainerStarted","Data":"28fb7b48f7920e0a5cc162de2e1fe78a8a26c56d848497dd82ca49d660a5fb06"} Mar 13 12:48:50.318954 master-0 kubenswrapper[6980]: I0313 12:48:50.318906 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" event={"ID":"16c2d774-967f-4964-ab4e-eb13c4364f63","Type":"ContainerStarted","Data":"20a66b9642396138b968a52a1eacae633e779f10a2a13e385f3fe594b5b1a4e2"} Mar 13 12:48:50.323403 master-0 kubenswrapper[6980]: I0313 12:48:50.323355 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" event={"ID":"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa","Type":"ContainerStarted","Data":"b636754f861cf04ae7e80a52d62d8d0663898c39c3b04cb631c04fc4021510f0"} Mar 13 12:48:50.327636 master-0 kubenswrapper[6980]: I0313 12:48:50.327602 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"dd9e5e8e374c81e1c66f6e45811bee38c8f529d7dd83812725266a3311710c8f"} Mar 13 12:48:50.327990 master-0 kubenswrapper[6980]: I0313 12:48:50.327953 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:48:50.328225 master-0 kubenswrapper[6980]: E0313 12:48:50.328194 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:51.337472 master-0 kubenswrapper[6980]: I0313 12:48:51.337393 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:48:51.338474 master-0 kubenswrapper[6980]: E0313 12:48:51.337711 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:52.689127 master-0 kubenswrapper[6980]: I0313 12:48:52.689027 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:48:52.689974 master-0 kubenswrapper[6980]: I0313 12:48:52.689939 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:48:52.690316 master-0 
kubenswrapper[6980]: E0313 12:48:52.690271 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:54.337090 master-0 kubenswrapper[6980]: E0313 12:48:54.337012 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:48:54.795761 master-0 kubenswrapper[6980]: I0313 12:48:54.795678 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-retry-1-master-0"] Mar 13 12:48:54.799548 master-0 kubenswrapper[6980]: I0313 12:48:54.799335 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz" podStartSLOduration=272.14555401 podStartE2EDuration="5m4.79927499s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="2026-03-13 12:43:51.8210991 +0000 UTC m=+299.155093726" lastFinishedPulling="2026-03-13 12:44:24.47482008 +0000 UTC m=+331.808814706" observedRunningTime="2026-03-13 12:48:54.779786169 +0000 UTC m=+602.113780795" watchObservedRunningTime="2026-03-13 12:48:54.79927499 +0000 UTC m=+602.133269616" Mar 13 12:48:54.831453 master-0 kubenswrapper[6980]: I0313 12:48:54.831374 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7"] Mar 13 12:48:54.842145 master-0 kubenswrapper[6980]: I0313 12:48:54.842082 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-s85h7"] Mar 13 12:48:54.852993 master-0 kubenswrapper[6980]: I0313 12:48:54.852650 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6w8hd" podStartSLOduration=258.948899227 podStartE2EDuration="5m12.852630662s" podCreationTimestamp="2026-03-13 12:43:42 +0000 UTC" firstStartedPulling="2026-03-13 12:43:43.754603717 +0000 UTC m=+291.088598343" lastFinishedPulling="2026-03-13 12:44:37.658335162 +0000 UTC m=+344.992329778" observedRunningTime="2026-03-13 12:48:54.850340521 +0000 UTC m=+602.184335157" watchObservedRunningTime="2026-03-13 12:48:54.852630662 +0000 UTC m=+602.186625288" Mar 13 12:48:54.872733 master-0 kubenswrapper[6980]: I0313 12:48:54.872653 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577" podStartSLOduration=270.342248025 podStartE2EDuration="5m4.872628519s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="2026-03-13 12:43:51.735813288 +0000 UTC m=+299.069807914" lastFinishedPulling="2026-03-13 12:44:26.266193782 +0000 UTC m=+333.600188408" observedRunningTime="2026-03-13 12:48:54.872154814 +0000 UTC m=+602.206149440" watchObservedRunningTime="2026-03-13 12:48:54.872628519 +0000 UTC m=+602.206623145" Mar 13 12:48:54.894512 master-0 kubenswrapper[6980]: I0313 12:48:54.894160 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a3a7953-ad67-432a-a546-71a5d4450ddd" path="/var/lib/kubelet/pods/5a3a7953-ad67-432a-a546-71a5d4450ddd/volumes" Mar 13 12:48:54.912466 master-0 kubenswrapper[6980]: I0313 12:48:54.911731 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-28fdg" podStartSLOduration=260.104781026 podStartE2EDuration="5m10.911714954s" podCreationTimestamp="2026-03-13 12:43:44 +0000 UTC" 
firstStartedPulling="2026-03-13 12:43:46.806135847 +0000 UTC m=+294.140130473" lastFinishedPulling="2026-03-13 12:44:37.613069775 +0000 UTC m=+344.947064401" observedRunningTime="2026-03-13 12:48:54.910905669 +0000 UTC m=+602.244900285" watchObservedRunningTime="2026-03-13 12:48:54.911714954 +0000 UTC m=+602.245709580" Mar 13 12:48:54.945776 master-0 kubenswrapper[6980]: I0313 12:48:54.939370 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" podStartSLOduration=270.780944622 podStartE2EDuration="5m4.93934953s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="2026-03-13 12:43:52.10342556 +0000 UTC m=+299.437420186" lastFinishedPulling="2026-03-13 12:44:26.261830468 +0000 UTC m=+333.595825094" observedRunningTime="2026-03-13 12:48:54.93710796 +0000 UTC m=+602.271102586" watchObservedRunningTime="2026-03-13 12:48:54.93934953 +0000 UTC m=+602.273344156" Mar 13 12:48:54.977570 master-0 kubenswrapper[6980]: I0313 12:48:54.977440 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" podStartSLOduration=270.447632017 podStartE2EDuration="5m4.977416483s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="2026-03-13 12:43:51.745524415 +0000 UTC m=+299.079519041" lastFinishedPulling="2026-03-13 12:44:26.275308871 +0000 UTC m=+333.609303507" observedRunningTime="2026-03-13 12:48:54.959900135 +0000 UTC m=+602.293894771" watchObservedRunningTime="2026-03-13 12:48:54.977416483 +0000 UTC m=+602.311411119" Mar 13 12:48:54.978428 master-0 kubenswrapper[6980]: I0313 12:48:54.978359 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" podStartSLOduration=271.669729339 podStartE2EDuration="5m4.978323872s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" 
firstStartedPulling="2026-03-13 12:43:51.733652371 +0000 UTC m=+299.067646997" lastFinishedPulling="2026-03-13 12:44:25.042246904 +0000 UTC m=+332.376241530" observedRunningTime="2026-03-13 12:48:54.975519214 +0000 UTC m=+602.309513840" watchObservedRunningTime="2026-03-13 12:48:54.978323872 +0000 UTC m=+602.312318498" Mar 13 12:48:55.009694 master-0 kubenswrapper[6980]: I0313 12:48:55.009546 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podStartSLOduration=300.00952457 podStartE2EDuration="5m0.00952457s" podCreationTimestamp="2026-03-13 12:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:48:55.005175713 +0000 UTC m=+602.339170339" watchObservedRunningTime="2026-03-13 12:48:55.00952457 +0000 UTC m=+602.343519196" Mar 13 12:48:55.026623 master-0 kubenswrapper[6980]: I0313 12:48:55.026527 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6vng8" podStartSLOduration=259.155383596 podStartE2EDuration="5m13.026509922s" podCreationTimestamp="2026-03-13 12:43:42 +0000 UTC" firstStartedPulling="2026-03-13 12:43:43.75631187 +0000 UTC m=+291.090306496" lastFinishedPulling="2026-03-13 12:44:37.627438196 +0000 UTC m=+344.961432822" observedRunningTime="2026-03-13 12:48:55.023429446 +0000 UTC m=+602.357424072" watchObservedRunningTime="2026-03-13 12:48:55.026509922 +0000 UTC m=+602.360504548" Mar 13 12:48:55.044247 master-0 kubenswrapper[6980]: I0313 12:48:55.043458 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-92rsn" podStartSLOduration=259.127705718 podStartE2EDuration="5m12.043440973s" podCreationTimestamp="2026-03-13 12:43:43 +0000 UTC" firstStartedPulling="2026-03-13 12:43:44.764514938 +0000 UTC m=+292.098509564" 
lastFinishedPulling="2026-03-13 12:44:37.680250193 +0000 UTC m=+345.014244819" observedRunningTime="2026-03-13 12:48:55.042335458 +0000 UTC m=+602.376330084" watchObservedRunningTime="2026-03-13 12:48:55.043440973 +0000 UTC m=+602.377435589" Mar 13 12:48:55.060613 master-0 kubenswrapper[6980]: I0313 12:48:55.058635 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz" podStartSLOduration=270.699342656 podStartE2EDuration="5m5.058614308s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="2026-03-13 12:43:51.894656224 +0000 UTC m=+299.228650850" lastFinishedPulling="2026-03-13 12:44:26.253927866 +0000 UTC m=+333.587922502" observedRunningTime="2026-03-13 12:48:55.056179762 +0000 UTC m=+602.390174398" watchObservedRunningTime="2026-03-13 12:48:55.058614308 +0000 UTC m=+602.392608934" Mar 13 12:48:55.306490 master-0 kubenswrapper[6980]: I0313 12:48:55.305939 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7"] Mar 13 12:48:55.315023 master-0 kubenswrapper[6980]: I0313 12:48:55.314941 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-jdwm7"] Mar 13 12:48:55.333330 master-0 kubenswrapper[6980]: I0313 12:48:55.333072 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" podStartSLOduration=271.804755396 podStartE2EDuration="5m5.333018249s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="2026-03-13 12:43:51.513978541 +0000 UTC m=+298.847973167" lastFinishedPulling="2026-03-13 12:44:25.042241384 +0000 UTC m=+332.376236020" observedRunningTime="2026-03-13 12:48:55.327039011 +0000 UTC m=+602.661033637" 
watchObservedRunningTime="2026-03-13 12:48:55.333018249 +0000 UTC m=+602.667012875" Mar 13 12:48:55.354389 master-0 kubenswrapper[6980]: I0313 12:48:55.354285 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" podStartSLOduration=271.02855687 podStartE2EDuration="5m5.354260765s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="2026-03-13 12:43:51.953652121 +0000 UTC m=+299.287646747" lastFinishedPulling="2026-03-13 12:44:26.279356006 +0000 UTC m=+333.613350642" observedRunningTime="2026-03-13 12:48:55.349779994 +0000 UTC m=+602.683774620" watchObservedRunningTime="2026-03-13 12:48:55.354260765 +0000 UTC m=+602.688255391" Mar 13 12:48:55.373923 master-0 kubenswrapper[6980]: I0313 12:48:55.371187 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bc244427-5e4e-441c-a04d-f93aeca9b7c1","Type":"ContainerStarted","Data":"31033f934bf0a080278d866d51b314b3816b30909bafd1008ea255c440f36fb0"} Mar 13 12:48:55.373923 master-0 kubenswrapper[6980]: I0313 12:48:55.371244 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bc244427-5e4e-441c-a04d-f93aeca9b7c1","Type":"ContainerStarted","Data":"c2463d59212cd944dab4ea9d30f2cc50f1b57872c877b533a967a0558f9e8739"} Mar 13 12:48:55.563080 master-0 kubenswrapper[6980]: I0313 12:48:55.562881 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" podStartSLOduration=302.562857613 podStartE2EDuration="5m2.562857613s" podCreationTimestamp="2026-03-13 12:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:48:55.561451619 +0000 UTC m=+602.895446255" watchObservedRunningTime="2026-03-13 
12:48:55.562857613 +0000 UTC m=+602.896852239" Mar 13 12:48:56.868048 master-0 kubenswrapper[6980]: I0313 12:48:56.867998 6980 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" path="/var/lib/kubelet/pods/b515c4c5-cec7-46d2-a435-1d46e26c30b8/volumes" Mar 13 12:48:58.479952 master-0 kubenswrapper[6980]: I0313 12:48:58.479399 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:48:58.480477 master-0 kubenswrapper[6980]: I0313 12:48:58.480035 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:48:58.480477 master-0 kubenswrapper[6980]: E0313 12:48:58.480250 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:48:59.890749 master-0 kubenswrapper[6980]: I0313 12:48:59.890658 6980 scope.go:117] "RemoveContainer" containerID="f1548edda6fc1651ae68b99d0898df5822866731cd8d5864b19d50d8643d5b08" Mar 13 12:49:00.414637 master-0 kubenswrapper[6980]: I0313 12:49:00.414283 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" event={"ID":"73dc5747-2d30-4a2d-a784-1dea1e10811d","Type":"ContainerStarted","Data":"85b877f0b3743a9e04e4e0f4feabf45130b64a3d6cd9d0d271e9660a0a0b4f1c"} Mar 13 12:49:01.479312 master-0 kubenswrapper[6980]: I0313 12:49:01.479202 6980 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" 
containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 12:49:02.862745 master-0 kubenswrapper[6980]: I0313 12:49:02.862690 6980 scope.go:117] "RemoveContainer" containerID="9611f10b22041823517def90fc354bf396ed36c2da787d15f2b67268e42a0e1b" Mar 13 12:49:03.811349 master-0 kubenswrapper[6980]: I0313 12:49:03.811274 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" event={"ID":"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6","Type":"ContainerStarted","Data":"c9127263b62714ef7f9efef2b1ec76a435d1b95451bd897f0a65e9d0f2b70390"} Mar 13 12:49:03.859734 master-0 kubenswrapper[6980]: I0313 12:49:03.859640 6980 scope.go:117] "RemoveContainer" containerID="710eb299157e1ef547583f7fd20b397c92fa5af65696f69dc8c6e3ebffa2ae8b" Mar 13 12:49:03.860032 master-0 kubenswrapper[6980]: E0313 12:49:03.859980 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-operator pod=kube-apiserver-operator-68bd585b-hsrbc_openshift-kube-apiserver-operator(684c9067-189a-4f50-ac8d-97111aa73d9c)\"" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" podUID="684c9067-189a-4f50-ac8d-97111aa73d9c" Mar 13 12:49:04.338271 master-0 kubenswrapper[6980]: E0313 12:49:04.338193 6980 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 12:49:04.635536 master-0 kubenswrapper[6980]: I0313 12:49:04.635435 6980 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 12:49:04.635963 master-0 kubenswrapper[6980]: E0313 12:49:04.635858 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="config-sync-controllers" Mar 13 12:49:04.635963 master-0 kubenswrapper[6980]: I0313 12:49:04.635893 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="config-sync-controllers" Mar 13 12:49:04.635963 master-0 kubenswrapper[6980]: E0313 12:49:04.635909 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="cluster-cloud-controller-manager" Mar 13 12:49:04.635963 master-0 kubenswrapper[6980]: I0313 12:49:04.635916 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="cluster-cloud-controller-manager" Mar 13 12:49:04.635963 master-0 kubenswrapper[6980]: E0313 12:49:04.635933 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="kube-rbac-proxy" Mar 13 12:49:04.635963 master-0 kubenswrapper[6980]: I0313 12:49:04.635943 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="kube-rbac-proxy" Mar 13 12:49:04.635963 master-0 kubenswrapper[6980]: E0313 12:49:04.635958 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a3a7953-ad67-432a-a546-71a5d4450ddd" containerName="machine-approver-controller" Mar 13 12:49:04.635963 master-0 kubenswrapper[6980]: I0313 12:49:04.635968 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a3a7953-ad67-432a-a546-71a5d4450ddd" containerName="machine-approver-controller" Mar 13 12:49:04.635963 master-0 kubenswrapper[6980]: E0313 12:49:04.635978 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5a3a7953-ad67-432a-a546-71a5d4450ddd" 
containerName="kube-rbac-proxy" Mar 13 12:49:04.636437 master-0 kubenswrapper[6980]: I0313 12:49:04.635987 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5a3a7953-ad67-432a-a546-71a5d4450ddd" containerName="kube-rbac-proxy" Mar 13 12:49:04.636437 master-0 kubenswrapper[6980]: E0313 12:49:04.636002 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ae954b-a362-4cd1-8e54-c4aedcf30a00" containerName="installer" Mar 13 12:49:04.636437 master-0 kubenswrapper[6980]: I0313 12:49:04.636011 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ae954b-a362-4cd1-8e54-c4aedcf30a00" containerName="installer" Mar 13 12:49:04.636437 master-0 kubenswrapper[6980]: I0313 12:49:04.636182 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ae954b-a362-4cd1-8e54-c4aedcf30a00" containerName="installer" Mar 13 12:49:04.636437 master-0 kubenswrapper[6980]: I0313 12:49:04.636215 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a3a7953-ad67-432a-a546-71a5d4450ddd" containerName="kube-rbac-proxy" Mar 13 12:49:04.636437 master-0 kubenswrapper[6980]: I0313 12:49:04.636229 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5a3a7953-ad67-432a-a546-71a5d4450ddd" containerName="machine-approver-controller" Mar 13 12:49:04.636437 master-0 kubenswrapper[6980]: I0313 12:49:04.636241 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="kube-rbac-proxy" Mar 13 12:49:04.636437 master-0 kubenswrapper[6980]: I0313 12:49:04.636251 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="cluster-cloud-controller-manager" Mar 13 12:49:04.636437 master-0 kubenswrapper[6980]: I0313 12:49:04.636262 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="b515c4c5-cec7-46d2-a435-1d46e26c30b8" containerName="config-sync-controllers" Mar 13 12:49:04.636924 master-0 
kubenswrapper[6980]: I0313 12:49:04.636869 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:04.640917 master-0 kubenswrapper[6980]: I0313 12:49:04.640860 6980 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7gz29" Mar 13 12:49:04.644261 master-0 kubenswrapper[6980]: I0313 12:49:04.644220 6980 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 12:49:04.655728 master-0 kubenswrapper[6980]: I0313 12:49:04.655685 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 12:49:04.693505 master-0 kubenswrapper[6980]: I0313 12:49:04.693416 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:04.693505 master-0 kubenswrapper[6980]: I0313 12:49:04.693488 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-var-lock\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:04.693850 master-0 kubenswrapper[6980]: I0313 12:49:04.693533 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80ceb0f9-67e4-4275-8532-85b6602367a2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 
12:49:04.794608 master-0 kubenswrapper[6980]: I0313 12:49:04.794510 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-var-lock\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:04.795023 master-0 kubenswrapper[6980]: I0313 12:49:04.794997 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80ceb0f9-67e4-4275-8532-85b6602367a2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:04.795222 master-0 kubenswrapper[6980]: I0313 12:49:04.795204 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:04.795335 master-0 kubenswrapper[6980]: I0313 12:49:04.795283 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:04.795405 master-0 kubenswrapper[6980]: I0313 12:49:04.794703 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-var-lock\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:04.812861 master-0 kubenswrapper[6980]: 
I0313 12:49:04.812803 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80ceb0f9-67e4-4275-8532-85b6602367a2-kube-api-access\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:04.964204 master-0 kubenswrapper[6980]: I0313 12:49:04.964035 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:05.392310 master-0 kubenswrapper[6980]: I0313 12:49:05.392242 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 12:49:05.827269 master-0 kubenswrapper[6980]: I0313 12:49:05.827174 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"80ceb0f9-67e4-4275-8532-85b6602367a2","Type":"ContainerStarted","Data":"c83ff937194332d291b1b5b800ca7831144c85fa708fce3eae5e12903a82439b"} Mar 13 12:49:05.827269 master-0 kubenswrapper[6980]: I0313 12:49:05.827247 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"80ceb0f9-67e4-4275-8532-85b6602367a2","Type":"ContainerStarted","Data":"dfda9ac962c72952dd338c0552968ea41c65cec9deb2da109d44fd46401c07be"} Mar 13 12:49:05.849088 master-0 kubenswrapper[6980]: I0313 12:49:05.848962 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=1.848915745 podStartE2EDuration="1.848915745s" podCreationTimestamp="2026-03-13 12:49:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:49:05.846453528 +0000 UTC m=+613.180448174" watchObservedRunningTime="2026-03-13 12:49:05.848915745 +0000 UTC m=+613.182910371" Mar 13 
12:49:08.484872 master-0 kubenswrapper[6980]: I0313 12:49:08.484795 6980 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:49:08.485515 master-0 kubenswrapper[6980]: I0313 12:49:08.485414 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:49:08.485684 master-0 kubenswrapper[6980]: E0313 12:49:08.485649 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:08.490063 master-0 kubenswrapper[6980]: I0313 12:49:08.490006 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:49:08.848719 master-0 kubenswrapper[6980]: I0313 12:49:08.848528 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:49:08.848949 master-0 kubenswrapper[6980]: E0313 12:49:08.848892 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:11.644880 master-0 kubenswrapper[6980]: I0313 12:49:11.644772 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 12:49:11.646310 master-0 
kubenswrapper[6980]: I0313 12:49:11.646205 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-4-master-0" podUID="80ceb0f9-67e4-4275-8532-85b6602367a2" containerName="installer" containerID="cri-o://c83ff937194332d291b1b5b800ca7831144c85fa708fce3eae5e12903a82439b" gracePeriod=30 Mar 13 12:49:13.650110 master-0 kubenswrapper[6980]: I0313 12:49:13.650044 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 13 12:49:13.651054 master-0 kubenswrapper[6980]: I0313 12:49:13.651023 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:13.667707 master-0 kubenswrapper[6980]: I0313 12:49:13.667631 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 13 12:49:13.821067 master-0 kubenswrapper[6980]: I0313 12:49:13.820029 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kube-api-access\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:13.821067 master-0 kubenswrapper[6980]: I0313 12:49:13.820335 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:13.821067 master-0 kubenswrapper[6980]: I0313 12:49:13.820534 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-var-lock\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:13.921551 master-0 kubenswrapper[6980]: I0313 12:49:13.921394 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-var-lock\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:13.921551 master-0 kubenswrapper[6980]: I0313 12:49:13.921477 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kube-api-access\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:13.922102 master-0 kubenswrapper[6980]: I0313 12:49:13.922061 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:13.922152 master-0 kubenswrapper[6980]: I0313 12:49:13.922076 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-var-lock\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:13.926305 master-0 kubenswrapper[6980]: I0313 12:49:13.926233 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:13.947358 master-0 kubenswrapper[6980]: I0313 12:49:13.947302 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kube-api-access\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:14.006088 master-0 kubenswrapper[6980]: I0313 12:49:14.006005 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:14.452300 master-0 kubenswrapper[6980]: I0313 12:49:14.452100 6980 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"] Mar 13 12:49:14.462559 master-0 kubenswrapper[6980]: W0313 12:49:14.462501 6980 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod07ccaa2e_0cf2_4205_b1e7_0d5b9d5fe4da.slice/crio-72b959a542e46e3641183520e8e6d5e56a7222530509233edfd3479ba9158651 WatchSource:0}: Error finding container 72b959a542e46e3641183520e8e6d5e56a7222530509233edfd3479ba9158651: Status 404 returned error can't find the container with id 72b959a542e46e3641183520e8e6d5e56a7222530509233edfd3479ba9158651 Mar 13 12:49:14.890139 master-0 kubenswrapper[6980]: I0313 12:49:14.890047 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da","Type":"ContainerStarted","Data":"72b959a542e46e3641183520e8e6d5e56a7222530509233edfd3479ba9158651"} Mar 13 12:49:15.900476 master-0 kubenswrapper[6980]: I0313 12:49:15.900355 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da","Type":"ContainerStarted","Data":"0d4bb79902a72b9f34162023ea867b8ebd9dc8bf3badc80d03372122dc90b2a4"} Mar 13 12:49:15.921173 master-0 kubenswrapper[6980]: I0313 12:49:15.921016 6980 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.92097003 podStartE2EDuration="2.92097003s" podCreationTimestamp="2026-03-13 12:49:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:49:15.918378479 +0000 UTC m=+623.252373125" watchObservedRunningTime="2026-03-13 12:49:15.92097003 +0000 UTC m=+623.254964656" Mar 13 12:49:18.860311 master-0 kubenswrapper[6980]: I0313 12:49:18.860246 6980 scope.go:117] "RemoveContainer" containerID="710eb299157e1ef547583f7fd20b397c92fa5af65696f69dc8c6e3ebffa2ae8b" Mar 13 12:49:19.935927 master-0 kubenswrapper[6980]: I0313 12:49:19.935855 6980 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" event={"ID":"684c9067-189a-4f50-ac8d-97111aa73d9c","Type":"ContainerStarted","Data":"d06c2189bc972b8690ec9bca014f61e9205280e78ce41d16d8e01c1644e5e70e"} Mar 13 12:49:19.948745 master-0 kubenswrapper[6980]: I0313 12:49:19.948693 6980 scope.go:117] "RemoveContainer" containerID="6b009be90010b458906ee5384812043c64b344c57f3d33c0327bca957e554f6b" Mar 13 12:49:19.965563 master-0 kubenswrapper[6980]: I0313 12:49:19.965511 6980 scope.go:117] "RemoveContainer" containerID="aa937531213df9edca1f974017f8219d25e8981234f54f6bab6be21f0713fc0c" Mar 13 12:49:19.998872 master-0 kubenswrapper[6980]: I0313 12:49:19.998833 6980 scope.go:117] "RemoveContainer" containerID="1deefa2eed04097ebe852cdcfbe526eeadec29031bfced962671dccee87c51d9" Mar 13 12:49:20.014794 master-0 kubenswrapper[6980]: I0313 12:49:20.014743 6980 
scope.go:117] "RemoveContainer" containerID="fb14b7f25225651cce5060024dd96fe2745167fe14059c382213bb9bcb069656" Mar 13 12:49:20.037493 master-0 kubenswrapper[6980]: I0313 12:49:20.037397 6980 scope.go:117] "RemoveContainer" containerID="5174065d158bac4c4f8df59a6fd09da4b437cfcdb6c1e02c2fa3d32ae43403ab" Mar 13 12:49:20.899564 master-0 kubenswrapper[6980]: I0313 12:49:20.899160 6980 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:49:22.863472 master-0 kubenswrapper[6980]: I0313 12:49:22.863410 6980 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" Mar 13 12:49:22.864105 master-0 kubenswrapper[6980]: E0313 12:49:22.863728 6980 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:26.479016 master-0 kubenswrapper[6980]: I0313 12:49:26.478922 6980 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:49:26.479943 master-0 kubenswrapper[6980]: I0313 12:49:26.479038 6980 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 
12:49:30.860515 master-0 kubenswrapper[6980]: I0313 12:49:30.860432 6980 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:49:30.860515 master-0 kubenswrapper[6980]: I0313 12:49:30.860516 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:49:30.963873 master-0 kubenswrapper[6980]: I0313 12:49:30.963523 6980 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:49:30.970195 master-0 kubenswrapper[6980]: I0313 12:49:30.970144 6980 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0" Mar 13 12:49:30.977362 master-0 kubenswrapper[6980]: I0313 12:49:30.977273 6980 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:49:30.992256 master-0 kubenswrapper[6980]: I0313 12:49:30.991768 6980 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 12:49:31.000277 master-0 kubenswrapper[6980]: I0313 12:49:31.000218 6980 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:49:31.000277 master-0 kubenswrapper[6980]: I0313 12:49:31.000256 6980 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2942fd90-73c5-44f5-92a6-b9f1150a0516" Mar 13 12:49:32.910199 master-0 kubenswrapper[6980]: E0313 12:49:32.910138 6980 file.go:109] "Unable to process watch event" err="can't process config file \"/etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml\": /etc/kubernetes/manifests/kube-apiserver-startup-monitor-pod.yaml: couldn't parse as pod(Object 'Kind' is missing in 'null'), please check config file" Mar 13 12:49:32.911824 master-0 kubenswrapper[6980]: I0313 12:49:32.911787 6980 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 12:49:32.913143 master-0 kubenswrapper[6980]: I0313 12:49:32.912996 6980 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 13 12:49:32.913477 master-0 kubenswrapper[6980]: I0313 12:49:32.913444 6980 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:32.913625 master-0 kubenswrapper[6980]: I0313 12:49:32.913459 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" containerID="cri-o://9cc438a36a13c0e2e1f239bcab312b0eda7119d2153cef22f48639612d94c13e" gracePeriod=15 Mar 13 12:49:32.913717 master-0 kubenswrapper[6980]: I0313 12:49:32.913672 6980 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:49:32.913780 master-0 kubenswrapper[6980]: I0313 12:49:32.913485 6980 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://63e03be6775769ad765af20dfd2ac68f1e500a160a4e77eda15bd7fdcfe1bc2a" gracePeriod=15 Mar 13 12:49:32.915030 master-0 kubenswrapper[6980]: E0313 12:49:32.914428 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 12:49:32.915030 master-0 kubenswrapper[6980]: I0313 12:49:32.914468 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 12:49:32.915030 master-0 kubenswrapper[6980]: E0313 12:49:32.914493 6980 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 12:49:32.915030 master-0 kubenswrapper[6980]: I0313 12:49:32.914500 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 12:49:32.915030 master-0 kubenswrapper[6980]: E0313 12:49:32.914513 6980 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 12:49:32.915030 master-0 kubenswrapper[6980]: I0313 12:49:32.914519 6980 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 12:49:32.915030 master-0 kubenswrapper[6980]: I0313 12:49:32.914680 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 12:49:32.915030 master-0 kubenswrapper[6980]: I0313 12:49:32.914706 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 12:49:32.915030 master-0 kubenswrapper[6980]: I0313 12:49:32.914717 6980 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 12:49:32.916473 master-0 kubenswrapper[6980]: I0313 12:49:32.916447 6980 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:32.979763 master-0 kubenswrapper[6980]: E0313 12:49:32.979685 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.050677 master-0 kubenswrapper[6980]: I0313 12:49:33.050093 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:33.050677 master-0 kubenswrapper[6980]: I0313 12:49:33.050166 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.050677 master-0 kubenswrapper[6980]: I0313 12:49:33.050339 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.050677 master-0 kubenswrapper[6980]: I0313 12:49:33.050390 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: 
\"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.050677 master-0 kubenswrapper[6980]: I0313 12:49:33.050486 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:33.050677 master-0 kubenswrapper[6980]: I0313 12:49:33.050531 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:33.050677 master-0 kubenswrapper[6980]: I0313 12:49:33.050569 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.050677 master-0 kubenswrapper[6980]: I0313 12:49:33.050652 6980 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.058893 master-0 kubenswrapper[6980]: E0313 
12:49:33.058824 6980 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:33.152381 master-0 kubenswrapper[6980]: I0313 12:49:33.152257 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:33.152381 master-0 kubenswrapper[6980]: I0313 12:49:33.152388 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152441 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152460 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152502 6980 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152539 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152555 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152598 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152648 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:33.152934 master-0 
kubenswrapper[6980]: I0313 12:49:33.152723 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152765 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152791 6980 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152833 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152821 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 
12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152873 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.152934 master-0 kubenswrapper[6980]: I0313 12:49:33.152926 6980 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:33.275533 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 13 12:49:33.311294 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 12:49:33.311631 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 13 12:49:33.312551 master-0 systemd[1]: kubelet.service: Consumed 1min 22.798s CPU time. Mar 13 12:49:33.346497 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 13 12:49:33.502239 master-0 kubenswrapper[19715]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:49:33.502239 master-0 kubenswrapper[19715]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 13 12:49:33.502239 master-0 kubenswrapper[19715]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:49:33.502239 master-0 kubenswrapper[19715]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:49:33.502239 master-0 kubenswrapper[19715]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 13 12:49:33.502239 master-0 kubenswrapper[19715]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 12:49:33.503287 master-0 kubenswrapper[19715]: I0313 12:49:33.502362 19715 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 13 12:49:33.505720 master-0 kubenswrapper[19715]: W0313 12:49:33.505694 19715 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 12:49:33.505720 master-0 kubenswrapper[19715]: W0313 12:49:33.505714 19715 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 12:49:33.505720 master-0 kubenswrapper[19715]: W0313 12:49:33.505722 19715 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 12:49:33.505720 master-0 kubenswrapper[19715]: W0313 12:49:33.505728 19715 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505735 19715 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 
12:49:33.505741 19715 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505747 19715 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505753 19715 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505758 19715 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505768 19715 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505774 19715 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505778 19715 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505783 19715 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505789 19715 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505794 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505800 19715 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505807 19715 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505815 19715 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505820 19715 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505826 19715 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505833 19715 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505838 19715 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505843 19715 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:49:33.505930 master-0 kubenswrapper[19715]: W0313 12:49:33.505848 19715 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505853 19715 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505858 19715 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505863 19715 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505868 19715 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505872 19715 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 12:49:33.507094 master-0 
kubenswrapper[19715]: W0313 12:49:33.505876 19715 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505880 19715 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505884 19715 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505889 19715 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505894 19715 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505900 19715 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505907 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505914 19715 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505919 19715 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505924 19715 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505930 19715 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505935 19715 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505942 19715 feature_gate.go:330] unrecognized feature 
gate: SetEIPForNLBIngressController Mar 13 12:49:33.507094 master-0 kubenswrapper[19715]: W0313 12:49:33.505948 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.505953 19715 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.505958 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.505963 19715 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.505968 19715 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.505972 19715 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.505977 19715 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.505981 19715 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.505986 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.505991 19715 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.505996 19715 feature_gate.go:330] unrecognized feature gate: Example Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506001 19715 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506006 19715 feature_gate.go:330] unrecognized feature gate: 
ClusterAPIInstall Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506010 19715 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506015 19715 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506020 19715 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506026 19715 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506044 19715 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506054 19715 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506064 19715 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506250 19715 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 12:49:33.508322 master-0 kubenswrapper[19715]: W0313 12:49:33.506271 19715 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: W0313 12:49:33.506277 19715 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: W0313 12:49:33.506282 19715 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: W0313 12:49:33.506287 19715 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: W0313 12:49:33.506294 19715 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. 
It will be removed in a future release. Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: W0313 12:49:33.506300 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: W0313 12:49:33.506306 19715 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: W0313 12:49:33.506312 19715 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: W0313 12:49:33.506318 19715 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506485 19715 flags.go:64] FLAG: --address="0.0.0.0" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506500 19715 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506509 19715 flags.go:64] FLAG: --anonymous-auth="true" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506517 19715 flags.go:64] FLAG: --application-metrics-count-limit="100" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506524 19715 flags.go:64] FLAG: --authentication-token-webhook="false" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506530 19715 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506537 19715 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506548 19715 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506555 19715 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506560 19715 flags.go:64] FLAG: 
--boot-id-file="/proc/sys/kernel/random/boot_id" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506567 19715 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506595 19715 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506602 19715 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Mar 13 12:49:33.509470 master-0 kubenswrapper[19715]: I0313 12:49:33.506608 19715 flags.go:64] FLAG: --cgroup-root="" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506613 19715 flags.go:64] FLAG: --cgroups-per-qos="true" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506618 19715 flags.go:64] FLAG: --client-ca-file="" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506623 19715 flags.go:64] FLAG: --cloud-config="" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506629 19715 flags.go:64] FLAG: --cloud-provider="" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506636 19715 flags.go:64] FLAG: --cluster-dns="[]" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506657 19715 flags.go:64] FLAG: --cluster-domain="" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506667 19715 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506678 19715 flags.go:64] FLAG: --config-dir="" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506684 19715 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506690 19715 flags.go:64] FLAG: --container-log-max-files="5" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506697 19715 flags.go:64] FLAG: --container-log-max-size="10Mi" Mar 13 
12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506703 19715 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506708 19715 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506714 19715 flags.go:64] FLAG: --containerd-namespace="k8s.io" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506726 19715 flags.go:64] FLAG: --contention-profiling="false" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506732 19715 flags.go:64] FLAG: --cpu-cfs-quota="true" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506737 19715 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506743 19715 flags.go:64] FLAG: --cpu-manager-policy="none" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506748 19715 flags.go:64] FLAG: --cpu-manager-policy-options="" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506755 19715 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506760 19715 flags.go:64] FLAG: --enable-controller-attach-detach="true" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506766 19715 flags.go:64] FLAG: --enable-debugging-handlers="true" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506771 19715 flags.go:64] FLAG: --enable-load-reader="false" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506776 19715 flags.go:64] FLAG: --enable-server="true" Mar 13 12:49:33.510557 master-0 kubenswrapper[19715]: I0313 12:49:33.506781 19715 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506789 19715 flags.go:64] FLAG: 
--event-burst="100" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506796 19715 flags.go:64] FLAG: --event-qps="50" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506801 19715 flags.go:64] FLAG: --event-storage-age-limit="default=0" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506807 19715 flags.go:64] FLAG: --event-storage-event-limit="default=0" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506814 19715 flags.go:64] FLAG: --eviction-hard="" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506822 19715 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506828 19715 flags.go:64] FLAG: --eviction-minimum-reclaim="" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506844 19715 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506850 19715 flags.go:64] FLAG: --eviction-soft="" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506855 19715 flags.go:64] FLAG: --eviction-soft-grace-period="" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506860 19715 flags.go:64] FLAG: --exit-on-lock-contention="false" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506866 19715 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506880 19715 flags.go:64] FLAG: --experimental-mounter-path="" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506890 19715 flags.go:64] FLAG: --fail-cgroupv1="false" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506901 19715 flags.go:64] FLAG: --fail-swap-on="true" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506906 19715 flags.go:64] FLAG: --feature-gates="" Mar 13 
12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506914 19715 flags.go:64] FLAG: --file-check-frequency="20s" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506920 19715 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506927 19715 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506934 19715 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506940 19715 flags.go:64] FLAG: --healthz-port="10248" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506951 19715 flags.go:64] FLAG: --help="false" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506956 19715 flags.go:64] FLAG: --hostname-override="" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506961 19715 flags.go:64] FLAG: --housekeeping-interval="10s" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506966 19715 flags.go:64] FLAG: --http-check-frequency="20s" Mar 13 12:49:33.512253 master-0 kubenswrapper[19715]: I0313 12:49:33.506972 19715 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.506977 19715 flags.go:64] FLAG: --image-credential-provider-config="" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.506982 19715 flags.go:64] FLAG: --image-gc-high-threshold="85" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.506987 19715 flags.go:64] FLAG: --image-gc-low-threshold="80" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.506993 19715 flags.go:64] FLAG: --image-service-endpoint="" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.506998 19715 flags.go:64] FLAG: --kernel-memcg-notification="false" Mar 13 12:49:33.514338 master-0 
kubenswrapper[19715]: I0313 12:49:33.507003 19715 flags.go:64] FLAG: --kube-api-burst="100" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507008 19715 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507014 19715 flags.go:64] FLAG: --kube-api-qps="50" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507019 19715 flags.go:64] FLAG: --kube-reserved="" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507024 19715 flags.go:64] FLAG: --kube-reserved-cgroup="" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507029 19715 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507034 19715 flags.go:64] FLAG: --kubelet-cgroups="" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507039 19715 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507044 19715 flags.go:64] FLAG: --lock-file="" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507049 19715 flags.go:64] FLAG: --log-cadvisor-usage="false" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507065 19715 flags.go:64] FLAG: --log-flush-frequency="5s" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507078 19715 flags.go:64] FLAG: --log-json-info-buffer-size="0" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507093 19715 flags.go:64] FLAG: --log-json-split-stream="false" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507104 19715 flags.go:64] FLAG: --log-text-info-buffer-size="0" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507110 19715 flags.go:64] FLAG: --log-text-split-stream="false" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 
12:49:33.507115 19715 flags.go:64] FLAG: --logging-format="text" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507120 19715 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507126 19715 flags.go:64] FLAG: --make-iptables-util-chains="true" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507132 19715 flags.go:64] FLAG: --manifest-url="" Mar 13 12:49:33.514338 master-0 kubenswrapper[19715]: I0313 12:49:33.507137 19715 flags.go:64] FLAG: --manifest-url-header="" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507144 19715 flags.go:64] FLAG: --max-housekeeping-interval="15s" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507150 19715 flags.go:64] FLAG: --max-open-files="1000000" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507159 19715 flags.go:64] FLAG: --max-pods="110" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507165 19715 flags.go:64] FLAG: --maximum-dead-containers="-1" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507171 19715 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507177 19715 flags.go:64] FLAG: --memory-manager-policy="None" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507182 19715 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507188 19715 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507193 19715 flags.go:64] FLAG: --node-ip="192.168.32.10" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507199 19715 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507212 19715 flags.go:64] FLAG: --node-status-max-images="50" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507218 19715 flags.go:64] FLAG: --node-status-update-frequency="10s" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507224 19715 flags.go:64] FLAG: --oom-score-adj="-999" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507229 19715 flags.go:64] FLAG: --pod-cidr="" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507234 19715 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507241 19715 flags.go:64] FLAG: --pod-manifest-path="" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507245 19715 flags.go:64] FLAG: --pod-max-pids="-1" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507250 19715 flags.go:64] FLAG: --pods-per-core="0" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507254 19715 flags.go:64] FLAG: --port="10250" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507259 19715 flags.go:64] FLAG: --protect-kernel-defaults="false" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507263 19715 flags.go:64] FLAG: --provider-id="" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507267 19715 flags.go:64] FLAG: --qos-reserved="" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507272 19715 flags.go:64] FLAG: --read-only-port="10255" Mar 13 12:49:33.516063 master-0 kubenswrapper[19715]: I0313 12:49:33.507285 19715 flags.go:64] FLAG: --register-node="true" Mar 13 
12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507289 19715 flags.go:64] FLAG: --register-schedulable="true" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507297 19715 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507306 19715 flags.go:64] FLAG: --registry-burst="10" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507310 19715 flags.go:64] FLAG: --registry-qps="5" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507314 19715 flags.go:64] FLAG: --reserved-cpus="" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507318 19715 flags.go:64] FLAG: --reserved-memory="" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507328 19715 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507333 19715 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507342 19715 flags.go:64] FLAG: --rotate-certificates="false" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507346 19715 flags.go:64] FLAG: --rotate-server-certificates="false" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507350 19715 flags.go:64] FLAG: --runonce="false" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507354 19715 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507359 19715 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507363 19715 flags.go:64] FLAG: --seccomp-default="false" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507368 19715 flags.go:64] FLAG: --serialize-image-pulls="true" Mar 13 12:49:33.517717 
master-0 kubenswrapper[19715]: I0313 12:49:33.507372 19715 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507377 19715 flags.go:64] FLAG: --storage-driver-db="cadvisor" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507381 19715 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507386 19715 flags.go:64] FLAG: --storage-driver-password="root" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507390 19715 flags.go:64] FLAG: --storage-driver-secure="false" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507395 19715 flags.go:64] FLAG: --storage-driver-table="stats" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507399 19715 flags.go:64] FLAG: --storage-driver-user="root" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507403 19715 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507407 19715 flags.go:64] FLAG: --sync-frequency="1m0s" Mar 13 12:49:33.517717 master-0 kubenswrapper[19715]: I0313 12:49:33.507411 19715 flags.go:64] FLAG: --system-cgroups="" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507415 19715 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507422 19715 flags.go:64] FLAG: --system-reserved-cgroup="" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507426 19715 flags.go:64] FLAG: --tls-cert-file="" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507430 19715 flags.go:64] FLAG: --tls-cipher-suites="[]" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507437 19715 flags.go:64] FLAG: --tls-min-version="" Mar 13 12:49:33.519827 
master-0 kubenswrapper[19715]: I0313 12:49:33.507442 19715 flags.go:64] FLAG: --tls-private-key-file="" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507446 19715 flags.go:64] FLAG: --topology-manager-policy="none" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507450 19715 flags.go:64] FLAG: --topology-manager-policy-options="" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507454 19715 flags.go:64] FLAG: --topology-manager-scope="container" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507458 19715 flags.go:64] FLAG: --v="2" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507464 19715 flags.go:64] FLAG: --version="false" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507470 19715 flags.go:64] FLAG: --vmodule="" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507475 19715 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: I0313 12:49:33.507479 19715 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: W0313 12:49:33.507608 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: W0313 12:49:33.507615 19715 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: W0313 12:49:33.507620 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: W0313 12:49:33.507626 19715 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: W0313 12:49:33.507630 19715 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: W0313 
12:49:33.507634 19715 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: W0313 12:49:33.507638 19715 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: W0313 12:49:33.507642 19715 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 12:49:33.519827 master-0 kubenswrapper[19715]: W0313 12:49:33.507646 19715 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507650 19715 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507654 19715 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507658 19715 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507663 19715 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507668 19715 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507673 19715 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507679 19715 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507684 19715 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507688 19715 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507692 19715 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507697 19715 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507701 19715 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507704 19715 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507708 19715 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507712 19715 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507715 19715 feature_gate.go:330] unrecognized feature gate: Example Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507719 19715 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507767 19715 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: W0313 12:49:33.507774 19715 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 12:49:33.521267 master-0 kubenswrapper[19715]: 
W0313 12:49:33.507779 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507784 19715 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507789 19715 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507793 19715 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507798 19715 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507802 19715 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507828 19715 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507834 19715 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507838 19715 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507843 19715 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507848 19715 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507852 19715 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507857 19715 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 
12:49:33.507862 19715 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507872 19715 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507877 19715 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507881 19715 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507886 19715 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507890 19715 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507896 19715 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 12:49:33.522786 master-0 kubenswrapper[19715]: W0313 12:49:33.507902 19715 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507908 19715 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507912 19715 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507923 19715 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507927 19715 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507931 19715 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507936 19715 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507941 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507946 19715 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507952 19715 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507959 19715 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507965 19715 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507969 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507974 19715 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507979 19715 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507984 19715 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507989 19715 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507993 19715 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.507998 19715 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 12:49:33.523492 master-0 kubenswrapper[19715]: W0313 12:49:33.508002 19715 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.508007 19715 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.508011 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.508015 19715 feature_gate.go:330] unrecognized feature gate: UpgradeStatus 
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.508019 19715 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: I0313 12:49:33.508043 19715 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: I0313 12:49:33.515146 19715 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: I0313 12:49:33.515213 19715 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.515307 19715 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.515315 19715 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.515320 19715 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.515324 19715 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.515329 19715 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.515333 19715 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.515337 19715 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:49:33.524314 master-0 kubenswrapper[19715]: W0313 12:49:33.515341 19715 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515345 19715 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515349 19715 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515354 19715 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515366 19715 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515374 19715 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515378 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515386 19715 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515390 19715 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515394 19715 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515397 19715 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515401 19715 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515405 19715 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515409 19715 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515412 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515416 19715 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515424 19715 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515428 19715 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515432 19715 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515435 19715 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:49:33.524961 master-0 kubenswrapper[19715]: W0313 12:49:33.515439 19715 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.515746 19715 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.515752 19715 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517920 19715 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517939 19715 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517945 19715 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517949 19715 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517953 19715 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517957 19715 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517960 19715 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517966 19715 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517973 19715 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517977 19715 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517981 19715 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517985 19715 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.517989 19715 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.518001 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.518009 19715 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.518013 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:49:33.525796 master-0 kubenswrapper[19715]: W0313 12:49:33.518017 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518021 19715 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518029 19715 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518033 19715 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518037 19715 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518041 19715 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518044 19715 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518048 19715 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518052 19715 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518056 19715 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518061 19715 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518066 19715 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518070 19715 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518074 19715 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518078 19715 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518082 19715 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518086 19715 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518090 19715 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518094 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:49:33.526502 master-0 kubenswrapper[19715]: W0313 12:49:33.518099 19715 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518105 19715 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518110 19715 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518114 19715 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518118 19715 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518122 19715 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518133 19715 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: I0313 12:49:33.518145 19715 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518253 19715 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518265 19715 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518270 19715 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518277 19715 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518281 19715 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518285 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518289 19715 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 12:49:33.527123 master-0 kubenswrapper[19715]: W0313 12:49:33.518293 19715 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518296 19715 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518300 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518303 19715 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518308 19715 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518315 19715 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518320 19715 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518325 19715 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518329 19715 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518334 19715 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518338 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518342 19715 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518345 19715 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518349 19715 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518353 19715 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518356 19715 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518360 19715 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518363 19715 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518367 19715 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 12:49:33.528110 master-0 kubenswrapper[19715]: W0313 12:49:33.518371 19715 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518374 19715 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518378 19715 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518382 19715 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518386 19715 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518397 19715 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518400 19715 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518408 19715 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518412 19715 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518415 19715 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518423 19715 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518427 19715 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518430 19715 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518434 19715 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518438 19715 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518445 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518449 19715 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518454 19715 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518458 19715 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518462 19715 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 12:49:33.529073 master-0 kubenswrapper[19715]: W0313 12:49:33.518468 19715 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518472 19715 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518477 19715 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518480 19715 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518484 19715 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518488 19715 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518492 19715 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518496 19715 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518500 19715 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518503 19715 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518507 19715 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518511 19715 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518516 19715 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518520 19715 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518524 19715 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518528 19715 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518532 19715 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518537 19715 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518541 19715 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518546 19715 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 12:49:33.530011 master-0 kubenswrapper[19715]: W0313 12:49:33.518550 19715 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: W0313 12:49:33.518554 19715 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: W0313 12:49:33.518557 19715 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: W0313 12:49:33.518561 19715 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: W0313 12:49:33.518565 19715 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: W0313 12:49:33.518569 19715 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: I0313 12:49:33.518590 19715 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: I0313 12:49:33.518791 19715 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: I0313 12:49:33.522727 19715 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: I0313 12:49:33.522883 19715 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: I0313 12:49:33.523195 19715 server.go:997] "Starting client certificate rotation"
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: I0313 12:49:33.523216 19715 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: I0313 12:49:33.523396 19715 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 12:27:50 +0000 UTC, rotation deadline is 2026-03-14 07:15:51.255278562 +0000 UTC
Mar 13 12:49:33.531328 master-0 kubenswrapper[19715]: I0313 12:49:33.523513 19715 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 18h26m17.731768221s for next certificate rotation
Mar 13 12:49:33.532416 master-0 kubenswrapper[19715]: I0313 12:49:33.524312 19715 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 12:49:33.532416 master-0 kubenswrapper[19715]: I0313 12:49:33.526156 19715 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 12:49:33.532416 master-0 kubenswrapper[19715]: I0313 12:49:33.529799 19715 log.go:25] "Validated CRI v1 runtime API"
Mar 13 12:49:33.536101 master-0 kubenswrapper[19715]: I0313 12:49:33.536006 19715 log.go:25] "Validated CRI v1 image API"
Mar 13 12:49:33.537308 master-0 kubenswrapper[19715]: I0313 12:49:33.537265 19715 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 12:49:33.545528 master-0 kubenswrapper[19715]: I0313 12:49:33.545459 19715 fs.go:135] Filesystem UUIDs: map[1540ec0a-5f02-47ef-9901-1615d58a2814:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 13 12:49:33.546325 master-0 kubenswrapper[19715]: I0313 12:49:33.545512 19715 fs.go:136] Filesystem partitions:
map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/01bcdd1bedab010174152427c2fc9fc5240d2b52c3bee410c42e480d89d6c0f8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/01bcdd1bedab010174152427c2fc9fc5240d2b52c3bee410c42e480d89d6c0f8/userdata/shm major:0 minor:787 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/142a3bdc9b5ff21edbbdecd123b72a85c46a9bbdc67183506baedeab4865493d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/142a3bdc9b5ff21edbbdecd123b72a85c46a9bbdc67183506baedeab4865493d/userdata/shm major:0 minor:267 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/18a8f8a3e194d3ca33fa06c6cb0a35721b606154a0b49ff431c90e0a47be8a6c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/18a8f8a3e194d3ca33fa06c6cb0a35721b606154a0b49ff431c90e0a47be8a6c/userdata/shm major:0 minor:270 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1addab03e0a43377bc42e7aa1ca7b3740d5d3b320dad8b09d9eff4da120413e0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1addab03e0a43377bc42e7aa1ca7b3740d5d3b320dad8b09d9eff4da120413e0/userdata/shm major:0 minor:546 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1b2dea30812459a0f2e3cad7fc9f7d04a23de47d9995bf80f1829df8b09480d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1b2dea30812459a0f2e3cad7fc9f7d04a23de47d9995bf80f1829df8b09480d6/userdata/shm major:0 minor:482 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1c5ece38636979dc6aaacdac426045ab401d2a85cb39e888cefc074380d03a96/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1c5ece38636979dc6aaacdac426045ab401d2a85cb39e888cefc074380d03a96/userdata/shm major:0 minor:324 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1d9516f705e1b8698eb1f3dec329a0f76ba7bb5d655d5175432f90e826464bf9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1d9516f705e1b8698eb1f3dec329a0f76ba7bb5d655d5175432f90e826464bf9/userdata/shm major:0 minor:268 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2f485ea5123a1d0182412387178e57b07dfd142ef3af3f80ba71084ac36459bd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2f485ea5123a1d0182412387178e57b07dfd142ef3af3f80ba71084ac36459bd/userdata/shm major:0 minor:409 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/30c48665c9970605b1c6eec8cc08b81474d790e408c1dda1af4341df6b8abab1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/30c48665c9970605b1c6eec8cc08b81474d790e408c1dda1af4341df6b8abab1/userdata/shm major:0 minor:249 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/310bb063b58a9159851ef88dd90cde60bf53039832d7c07feba8d470bdfa8768/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/310bb063b58a9159851ef88dd90cde60bf53039832d7c07feba8d470bdfa8768/userdata/shm major:0 minor:310 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/31d28339b74a0d08ca9d705b4d13c84a3aaf85f1383fa6b578b10c51b3fe36e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/31d28339b74a0d08ca9d705b4d13c84a3aaf85f1383fa6b578b10c51b3fe36e2/userdata/shm major:0 minor:148 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/4801a1906a7001eae337b963c9facf81446c4cb5eb428077e46f31714758e82d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4801a1906a7001eae337b963c9facf81446c4cb5eb428077e46f31714758e82d/userdata/shm major:0 minor:536 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4a1a1ca2f1f627a9edd53099939af120013911bcf17806e1f6a21cd1517caec4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4a1a1ca2f1f627a9edd53099939af120013911bcf17806e1f6a21cd1517caec4/userdata/shm major:0 minor:345 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5868e4aaa495ba2002dc9f38876278ea8eced1d322d3455b76a22ad5843a0e53/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5868e4aaa495ba2002dc9f38876278ea8eced1d322d3455b76a22ad5843a0e53/userdata/shm major:0 minor:304 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/58be068b21a4eb91682595cd919b568f64a42b5eea6271ec682461e07a92c3ae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/58be068b21a4eb91682595cd919b568f64a42b5eea6271ec682461e07a92c3ae/userdata/shm major:0 minor:348 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/66a62527c0e5db66e9872c3dd7560bdbc6ef268bc8ac034206fe2aa11b418af3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/66a62527c0e5db66e9872c3dd7560bdbc6ef268bc8ac034206fe2aa11b418af3/userdata/shm major:0 minor:617 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6970059f480dc091ae05c0c7c9205d04df86a1f3452392a79024b011c7f566dc/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6970059f480dc091ae05c0c7c9205d04df86a1f3452392a79024b011c7f566dc/userdata/shm major:0 minor:130 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/72af96f2c7705b273fca5fc5d267412d3d3c7c9e170609cf42269c51f6355917/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/72af96f2c7705b273fca5fc5d267412d3d3c7c9e170609cf42269c51f6355917/userdata/shm major:0 minor:734 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/72b959a542e46e3641183520e8e6d5e56a7222530509233edfd3479ba9158651/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/72b959a542e46e3641183520e8e6d5e56a7222530509233edfd3479ba9158651/userdata/shm major:0 minor:954 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/72c8417a873fd1b85ceced7f871125b403b5b588edc21a1d386d6970721625a8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/72c8417a873fd1b85ceced7f871125b403b5b588edc21a1d386d6970721625a8/userdata/shm major:0 minor:784 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/775453f0311a20f5a59ce1be5cefed7836882d9f13ee9dc3248617ae5895d787/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/775453f0311a20f5a59ce1be5cefed7836882d9f13ee9dc3248617ae5895d787/userdata/shm major:0 minor:272 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7a24aff88a2b33793c90602bd0f46317c68b5e2becc49d106f2e8cd82fff29f4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7a24aff88a2b33793c90602bd0f46317c68b5e2becc49d106f2e8cd82fff29f4/userdata/shm major:0 minor:746 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/824d5e18774211ffd65269e6c76a79cffc7294bc9b558c91abfddb9b02e76444/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/824d5e18774211ffd65269e6c76a79cffc7294bc9b558c91abfddb9b02e76444/userdata/shm major:0 minor:806 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/8c72f4222c0466238ecef6497355ca369f8bfcd600621df230959caf510fb4c4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/8c72f4222c0466238ecef6497355ca369f8bfcd600621df230959caf510fb4c4/userdata/shm major:0 minor:779 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/91b49f3d1bef1ff2ffc876781ea51843f67335017ffa1e90ffc9330a2dc71785/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/91b49f3d1bef1ff2ffc876781ea51843f67335017ffa1e90ffc9330a2dc71785/userdata/shm major:0 minor:165 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/923bfd475b1f36c0aed9c9baa6b1e8120764cc5989d69bd8394f8af7e46356e0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/923bfd475b1f36c0aed9c9baa6b1e8120764cc5989d69bd8394f8af7e46356e0/userdata/shm major:0 minor:766 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/99e9d3fc7152ff7bfdbd97007d95913bd72cfac57cdb379fde935a1b0b89854a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/99e9d3fc7152ff7bfdbd97007d95913bd72cfac57cdb379fde935a1b0b89854a/userdata/shm major:0 minor:795 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/a31388bf3eb4be6295c3f302e94eade7f88980688dad331a6fb5026c223c9070/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a31388bf3eb4be6295c3f302e94eade7f88980688dad331a6fb5026c223c9070/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a52c7346de93add1d237d99f0d1a7027e99e77d0afd84eceb9bcc49809bf923e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a52c7346de93add1d237d99f0d1a7027e99e77d0afd84eceb9bcc49809bf923e/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a7ea7f8a7c14a4770bc974d998f5bd5daace368d7b2428f8320ae10321a074ac/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a7ea7f8a7c14a4770bc974d998f5bd5daace368d7b2428f8320ae10321a074ac/userdata/shm major:0 minor:539 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a8c0e7677e600788801fd2620471398efea77f43fbc90f3feb8d2a58a5b40162/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a8c0e7677e600788801fd2620471398efea77f43fbc90f3feb8d2a58a5b40162/userdata/shm major:0 minor:174 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aca574d944d0c954b9a43d41c7decf56919de511e4613805cddc5cc602dee814/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aca574d944d0c954b9a43d41c7decf56919de511e4613805cddc5cc602dee814/userdata/shm major:0 minor:258 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/adad8c1ef5c4b589ed8b1cb34f6484ca79dbaffdd4f786714ba25a8f28ac7eaf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/adad8c1ef5c4b589ed8b1cb34f6484ca79dbaffdd4f786714ba25a8f28ac7eaf/userdata/shm major:0 minor:797 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ae06b35b34defd433d66d0dcfdcccb5e623a3353da2ccedea19406db7fe465d6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ae06b35b34defd433d66d0dcfdcccb5e623a3353da2ccedea19406db7fe465d6/userdata/shm major:0 minor:366 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ae713e76b592ab486e74396025cc6216796b64de06bdba6168c650a39735be09/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ae713e76b592ab486e74396025cc6216796b64de06bdba6168c650a39735be09/userdata/shm major:0 minor:248 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/af3b95a05d0ae3229790032e0ff83bd0ae5924b5a61d802b485f5d4cc67a961c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/af3b95a05d0ae3229790032e0ff83bd0ae5924b5a61d802b485f5d4cc67a961c/userdata/shm major:0 minor:401 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b3ecbff0b1ffe2eac307dbf08badd582929ec9ff7e80f96a8ca7754f559637ea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b3ecbff0b1ffe2eac307dbf08badd582929ec9ff7e80f96a8ca7754f559637ea/userdata/shm major:0 minor:256 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226/userdata/shm major:0 minor:44 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b833b4c44fc7671aadf2bbf7695850b67cef941ee23693e9e8acaa00999b3a13/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b833b4c44fc7671aadf2bbf7695850b67cef941ee23693e9e8acaa00999b3a13/userdata/shm major:0 minor:655 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b8d42a515c20f0a163956eb8cf93dea5da1bfe49ebc70be65a7367110ca9d5ce/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b8d42a515c20f0a163956eb8cf93dea5da1bfe49ebc70be65a7367110ca9d5ce/userdata/shm major:0 minor:308 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b99b6a6f8847624f7d1b248d004e4f915acf70fd8eb923011f7483aa95bb9e70/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b99b6a6f8847624f7d1b248d004e4f915acf70fd8eb923011f7483aa95bb9e70/userdata/shm major:0 minor:790 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bd48d4fa30aeda024af9d88b2a92ab9f3ad6a982cbd20ba4d8bca985b63c0b34/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bd48d4fa30aeda024af9d88b2a92ab9f3ad6a982cbd20ba4d8bca985b63c0b34/userdata/shm major:0 minor:75 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bf47ad2a6c4b47eeb6f25e8817c53884dd3c9945b6828715576a49bc5541234a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bf47ad2a6c4b47eeb6f25e8817c53884dd3c9945b6828715576a49bc5541234a/userdata/shm major:0 minor:252 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c2463d59212cd944dab4ea9d30f2cc50f1b57872c877b533a967a0558f9e8739/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c2463d59212cd944dab4ea9d30f2cc50f1b57872c877b533a967a0558f9e8739/userdata/shm major:0 minor:77 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/c376cfcc149f814093143297d444233d029091219b6838c537c7a5d68a679b01/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c376cfcc149f814093143297d444233d029091219b6838c537c7a5d68a679b01/userdata/shm major:0 minor:346 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c50b66c08b64d0837766db36e00d9e48a3e7f90a13ec9264ea03f094b56406e2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c50b66c08b64d0837766db36e00d9e48a3e7f90a13ec9264ea03f094b56406e2/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c5484d1b3c48429e30590a8c004d9563ea8ff1590e9912835b4e1fb40bb82de5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c5484d1b3c48429e30590a8c004d9563ea8ff1590e9912835b4e1fb40bb82de5/userdata/shm major:0 minor:477 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c80b4d29df703d07a23db2b30b8fb506c55a2da67bacba3eebf13044aa056687/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c80b4d29df703d07a23db2b30b8fb506c55a2da67bacba3eebf13044aa056687/userdata/shm major:0 minor:100 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ccdf24fb12f7d902aeac298cfdb10afdab60e06015a73c1ef84d90c38418232b/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ccdf24fb12f7d902aeac298cfdb10afdab60e06015a73c1ef84d90c38418232b/userdata/shm major:0 minor:473 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cce07379c81de2caa56b921b64dd3ee63be30f56bcec066d326de0a8f136d5b8/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cce07379c81de2caa56b921b64dd3ee63be30f56bcec066d326de0a8f136d5b8/userdata/shm major:0 minor:792 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/cef5b900e1661977211454ffc9aaadd8fa1b91ab51948137171cbc32a2dba7c7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cef5b900e1661977211454ffc9aaadd8fa1b91ab51948137171cbc32a2dba7c7/userdata/shm major:0 minor:312 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cf67d16ae41f2d06685c25d23bb40014bd3ceb93a00f8755a0e1d4d5c6c424a3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cf67d16ae41f2d06685c25d23bb40014bd3ceb93a00f8755a0e1d4d5c6c424a3/userdata/shm major:0 minor:789 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d30550f78f634355f75a46b81834746cb5b11fa2ba553146cdee3bed2ae12ebf/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d30550f78f634355f75a46b81834746cb5b11fa2ba553146cdee3bed2ae12ebf/userdata/shm major:0 minor:764 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d37419779a5a99d07a8431d2c7b74e48bacfbaba667a5ee5762a54d36c0f1cf1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d37419779a5a99d07a8431d2c7b74e48bacfbaba667a5ee5762a54d36c0f1cf1/userdata/shm major:0 minor:612 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d536b99e9f1c4d3aa396db896e6b1009ff8fdbe64376ba3de95876a07436f12a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d536b99e9f1c4d3aa396db896e6b1009ff8fdbe64376ba3de95876a07436f12a/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d56fe854f57a86510068f43b63767127f8679659000c7763b64518661a2fe300/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d56fe854f57a86510068f43b63767127f8679659000c7763b64518661a2fe300/userdata/shm major:0 minor:635 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d7a36fdd0b153d8fdb4540b3fcd458052672e0226aedc009e1ca191a106ed499/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d7a36fdd0b153d8fdb4540b3fcd458052672e0226aedc009e1ca191a106ed499/userdata/shm major:0 minor:326 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/dfda9ac962c72952dd338c0552968ea41c65cec9deb2da109d44fd46401c07be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/dfda9ac962c72952dd338c0552968ea41c65cec9deb2da109d44fd46401c07be/userdata/shm major:0 minor:943 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e6d506914f674acae7c420a21d64287e5d50a2208f22be2bad24040b690bdfea/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e6d506914f674acae7c420a21d64287e5d50a2208f22be2bad24040b690bdfea/userdata/shm major:0 minor:506 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/e6d943705af2ecd94efc1b7b2e6e66854f8618298d38d9d6c5776dd66e931d3a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e6d943705af2ecd94efc1b7b2e6e66854f8618298d38d9d6c5776dd66e931d3a/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ef7730594563babb92c30139e5b185c02149726a1290cf94d92c26f164aa3181/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ef7730594563babb92c30139e5b185c02149726a1290cf94d92c26f164aa3181/userdata/shm major:0 minor:261 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/fcf34b9143a79db85809e953d50ec9054167443cbeec784e34d10ce0fb366cff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fcf34b9143a79db85809e953d50ec9054167443cbeec784e34d10ce0fb366cff/userdata/shm major:0 minor:537 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/fea61f96ae5a58f1058d560f7a03de973bc0402e1a0675f1764951c0f4d6890e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/fea61f96ae5a58f1058d560f7a03de973bc0402e1a0675f1764951c0f4d6890e/userdata/shm major:0 minor:265 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03758d96-5a20-4cba-92e0-47f5b1a3e558/volumes/kubernetes.io~projected/kube-api-access-55v4q:{mountpoint:/var/lib/kubelet/pods/03758d96-5a20-4cba-92e0-47f5b1a3e558/volumes/kubernetes.io~projected/kube-api-access-55v4q major:0 minor:771 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/03758d96-5a20-4cba-92e0-47f5b1a3e558/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/03758d96-5a20-4cba-92e0-47f5b1a3e558/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:772 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da/volumes/kubernetes.io~projected/kube-api-access major:0 minor:949 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~projected/kube-api-access-vgd4v:{mountpoint:/var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~projected/kube-api-access-vgd4v major:0 minor:428 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~secret/encryption-config major:0 minor:427 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~secret/etcd-client major:0 minor:426 fsType:tmpfs 
blockSize:0} /var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~secret/serving-cert major:0 minor:633 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~projected/kube-api-access major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1/volumes/kubernetes.io~projected/kube-api-access-dg5p4:{mountpoint:/var/lib/kubelet/pods/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1/volumes/kubernetes.io~projected/kube-api-access-dg5p4 major:0 minor:757 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1/volumes/kubernetes.io~secret/serving-cert major:0 minor:754 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/volumes/kubernetes.io~projected/kube-api-access-9wqpz:{mountpoint:/var/lib/kubelet/pods/1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/volumes/kubernetes.io~projected/kube-api-access-9wqpz major:0 minor:404 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/14eb83e7-c436-4f10-8cba-29e09a7036a8/volumes/kubernetes.io~projected/kube-api-access-kvn5d:{mountpoint:/var/lib/kubelet/pods/14eb83e7-c436-4f10-8cba-29e09a7036a8/volumes/kubernetes.io~projected/kube-api-access-kvn5d major:0 minor:778 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/14eb83e7-c436-4f10-8cba-29e09a7036a8/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/14eb83e7-c436-4f10-8cba-29e09a7036a8/volumes/kubernetes.io~secret/proxy-tls major:0 minor:769 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/kube-api-access-bdvgq:{mountpoint:/var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/kube-api-access-bdvgq major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:534 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~projected/kube-api-access major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~projected/kube-api-access-b2lvh:{mountpoint:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~projected/kube-api-access-b2lvh major:0 minor:153 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:129 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e9803a4-a166-42dc-9498-57e213602684/volumes/kubernetes.io~projected/kube-api-access-4vqww:{mountpoint:/var/lib/kubelet/pods/1e9803a4-a166-42dc-9498-57e213602684/volumes/kubernetes.io~projected/kube-api-access-4vqww major:0 minor:416 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/1e9803a4-a166-42dc-9498-57e213602684/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/1e9803a4-a166-42dc-9498-57e213602684/volumes/kubernetes.io~secret/signing-key major:0 minor:415 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20217cff-2f81-4a56-9c15-28385c19258c/volumes/kubernetes.io~projected/kube-api-access-nvprm:{mountpoint:/var/lib/kubelet/pods/20217cff-2f81-4a56-9c15-28385c19258c/volumes/kubernetes.io~projected/kube-api-access-nvprm major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/20217cff-2f81-4a56-9c15-28385c19258c/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/20217cff-2f81-4a56-9c15-28385c19258c/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:52 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2a5976df-0366-47b3-bc54-1ba7c249e87c/volumes/kubernetes.io~projected/kube-api-access-27pbr:{mountpoint:/var/lib/kubelet/pods/2a5976df-0366-47b3-bc54-1ba7c249e87c/volumes/kubernetes.io~projected/kube-api-access-27pbr major:0 minor:229 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2a5976df-0366-47b3-bc54-1ba7c249e87c/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/2a5976df-0366-47b3-bc54-1ba7c249e87c/volumes/kubernetes.io~secret/srv-cert major:0 minor:51 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2b5ab386-14ed-4610-a08a-54b6de877603/volumes/kubernetes.io~projected/kube-api-access-nqxjz:{mountpoint:/var/lib/kubelet/pods/2b5ab386-14ed-4610-a08a-54b6de877603/volumes/kubernetes.io~projected/kube-api-access-nqxjz major:0 minor:260 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31442e1e-3f42-4dba-82d5-08e5f8d29a58/volumes/kubernetes.io~projected/kube-api-access-lm4d2:{mountpoint:/var/lib/kubelet/pods/31442e1e-3f42-4dba-82d5-08e5f8d29a58/volumes/kubernetes.io~projected/kube-api-access-lm4d2 major:0 minor:758 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31442e1e-3f42-4dba-82d5-08e5f8d29a58/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/31442e1e-3f42-4dba-82d5-08e5f8d29a58/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:756 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~projected/kube-api-access-5jknp:{mountpoint:/var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~projected/kube-api-access-5jknp major:0 minor:91 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~projected/kube-api-access-r9sfh:{mountpoint:/var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~projected/kube-api-access-r9sfh major:0 minor:236 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~secret/serving-cert major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4f942fce-07a9-4377-8330-c6249a5a8b24/volumes/kubernetes.io~projected/kube-api-access-7s2cb:{mountpoint:/var/lib/kubelet/pods/4f942fce-07a9-4377-8330-c6249a5a8b24/volumes/kubernetes.io~projected/kube-api-access-7s2cb major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/4f942fce-07a9-4377-8330-c6249a5a8b24/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/4f942fce-07a9-4377-8330-c6249a5a8b24/volumes/kubernetes.io~secret/webhook-certs major:0 minor:50 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~projected/kube-api-access-qttkt:{mountpoint:/var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~projected/kube-api-access-qttkt major:0 minor:242 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5623ea13-a34b-4510-8902-341912d115df/volumes/kubernetes.io~projected/kube-api-access-q9tpt:{mountpoint:/var/lib/kubelet/pods/5623ea13-a34b-4510-8902-341912d115df/volumes/kubernetes.io~projected/kube-api-access-q9tpt major:0 minor:743 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~projected/kube-api-access-pqm5h:{mountpoint:/var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~projected/kube-api-access-pqm5h major:0 minor:232 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:532 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:531 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58581675-62f2-4564-9e12-bf34551b96ac/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/58581675-62f2-4564-9e12-bf34551b96ac/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:565 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58581675-62f2-4564-9e12-bf34551b96ac/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/58581675-62f2-4564-9e12-bf34551b96ac/volumes/kubernetes.io~empty-dir/tmp major:0 minor:601 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/58581675-62f2-4564-9e12-bf34551b96ac/volumes/kubernetes.io~projected/kube-api-access-64w7v:{mountpoint:/var/lib/kubelet/pods/58581675-62f2-4564-9e12-bf34551b96ac/volumes/kubernetes.io~projected/kube-api-access-64w7v major:0 minor:602 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59c9773d-7e88-4e30-9b8a-792a869a860e/volumes/kubernetes.io~projected/kube-api-access-vp6bn:{mountpoint:/var/lib/kubelet/pods/59c9773d-7e88-4e30-9b8a-792a869a860e/volumes/kubernetes.io~projected/kube-api-access-vp6bn major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/59c9773d-7e88-4e30-9b8a-792a869a860e/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/59c9773d-7e88-4e30-9b8a-792a869a860e/volumes/kubernetes.io~secret/metrics-certs major:0 minor:64 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c/volumes/kubernetes.io~projected/kube-api-access-cscxl:{mountpoint:/var/lib/kubelet/pods/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c/volumes/kubernetes.io~projected/kube-api-access-cscxl major:0 minor:759 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:755 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~projected/kube-api-access-x27d2:{mountpoint:/var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~projected/kube-api-access-x27d2 major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~projected/kube-api-access-rspzx:{mountpoint:/var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~projected/kube-api-access-rspzx major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6592aa5b-4a50-40f6-80a5-87e3c547018d/volumes/kubernetes.io~projected/kube-api-access-s7cgb:{mountpoint:/var/lib/kubelet/pods/6592aa5b-4a50-40f6-80a5-87e3c547018d/volumes/kubernetes.io~projected/kube-api-access-s7cgb major:0 minor:760 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6592aa5b-4a50-40f6-80a5-87e3c547018d/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/6592aa5b-4a50-40f6-80a5-87e3c547018d/volumes/kubernetes.io~secret/cert major:0 minor:569 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~projected/kube-api-access major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6d1a0616-4479-4621-b042-36a586bd8248/volumes/kubernetes.io~projected/kube-api-access-jn59j:{mountpoint:/var/lib/kubelet/pods/6d1a0616-4479-4621-b042-36a586bd8248/volumes/kubernetes.io~projected/kube-api-access-jn59j major:0 minor:115 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e4e773c-d970-4f5e-9172-c1ebdb41888d/volumes/kubernetes.io~projected/kube-api-access-tdcsm:{mountpoint:/var/lib/kubelet/pods/6e4e773c-d970-4f5e-9172-c1ebdb41888d/volumes/kubernetes.io~projected/kube-api-access-tdcsm major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e4e773c-d970-4f5e-9172-c1ebdb41888d/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/6e4e773c-d970-4f5e-9172-c1ebdb41888d/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:62 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~projected/kube-api-access-x2jkn:{mountpoint:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~projected/kube-api-access-x2jkn major:0 minor:235 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/etcd-client major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/70c8b79e-4d29-4ae2-a24f-68595d942442/volumes/kubernetes.io~projected/kube-api-access-bk8kt:{mountpoint:/var/lib/kubelet/pods/70c8b79e-4d29-4ae2-a24f-68595d942442/volumes/kubernetes.io~projected/kube-api-access-bk8kt major:0 minor:303 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71b741d4-3899-4d31-afd1-72f5a9321f75/volumes/kubernetes.io~projected/kube-api-access-2h5ht:{mountpoint:/var/lib/kubelet/pods/71b741d4-3899-4d31-afd1-72f5a9321f75/volumes/kubernetes.io~projected/kube-api-access-2h5ht major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/71b741d4-3899-4d31-afd1-72f5a9321f75/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/71b741d4-3899-4d31-afd1-72f5a9321f75/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:53 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/730e1f43-39b7-41de-ac81-270966725477/volumes/kubernetes.io~projected/kube-api-access-2vt8r:{mountpoint:/var/lib/kubelet/pods/730e1f43-39b7-41de-ac81-270966725477/volumes/kubernetes.io~projected/kube-api-access-2vt8r major:0 minor:733 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7343df96-cba2-477b-8a1b-7af369620440/volumes/kubernetes.io~projected/kube-api-access-6vg7m:{mountpoint:/var/lib/kubelet/pods/7343df96-cba2-477b-8a1b-7af369620440/volumes/kubernetes.io~projected/kube-api-access-6vg7m major:0 minor:302 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7343df96-cba2-477b-8a1b-7af369620440/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7343df96-cba2-477b-8a1b-7af369620440/volumes/kubernetes.io~secret/serving-cert major:0 minor:301 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~projected/kube-api-access-9vsld:{mountpoint:/var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~projected/kube-api-access-9vsld major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~secret/serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75/volumes/kubernetes.io~projected/kube-api-access-cv745:{mountpoint:/var/lib/kubelet/pods/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75/volumes/kubernetes.io~projected/kube-api-access-cv745 major:0 minor:776 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:775 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~projected/kube-api-access-hpjj6:{mountpoint:/var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~projected/kube-api-access-hpjj6 major:0 minor:469 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~secret/encryption-config major:0 minor:461 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~secret/etcd-client major:0 minor:468 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~secret/serving-cert major:0 minor:472 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80ceb0f9-67e4-4275-8532-85b6602367a2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/80ceb0f9-67e4-4275-8532-85b6602367a2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:730 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8226ffac-1f76-4eaa-ada5-056b5fd031b4/volumes/kubernetes.io~projected/kube-api-access-gkcxc:{mountpoint:/var/lib/kubelet/pods/8226ffac-1f76-4eaa-ada5-056b5fd031b4/volumes/kubernetes.io~projected/kube-api-access-gkcxc major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8226ffac-1f76-4eaa-ada5-056b5fd031b4/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/8226ffac-1f76-4eaa-ada5-056b5fd031b4/volumes/kubernetes.io~secret/srv-cert major:0 minor:63 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5/volumes/kubernetes.io~projected/kube-api-access-qtpqk:{mountpoint:/var/lib/kubelet/pods/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5/volumes/kubernetes.io~projected/kube-api-access-qtpqk major:0 minor:611 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5/volumes/kubernetes.io~secret/metrics-tls major:0 minor:607 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/90c6474d-44a1-4164-a85b-6de0525dc656/volumes/kubernetes.io~projected/kube-api-access-wwjh6:{mountpoint:/var/lib/kubelet/pods/90c6474d-44a1-4164-a85b-6de0525dc656/volumes/kubernetes.io~projected/kube-api-access-wwjh6 major:0 minor:92 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/90c6474d-44a1-4164-a85b-6de0525dc656/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/90c6474d-44a1-4164-a85b-6de0525dc656/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:783 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/90c6474d-44a1-4164-a85b-6de0525dc656/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/90c6474d-44a1-4164-a85b-6de0525dc656/volumes/kubernetes.io~secret/webhook-cert major:0 minor:533 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa/volumes/kubernetes.io~projected/kube-api-access-9n8sb:{mountpoint:/var/lib/kubelet/pods/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa/volumes/kubernetes.io~projected/kube-api-access-9n8sb major:0 minor:241 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8c840d1-8047-4ad6-a990-3ab119ae1cc5/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/a8c840d1-8047-4ad6-a990-3ab119ae1cc5/volumes/kubernetes.io~projected/ca-certs major:0 minor:418 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8c840d1-8047-4ad6-a990-3ab119ae1cc5/volumes/kubernetes.io~projected/kube-api-access-w97j5:{mountpoint:/var/lib/kubelet/pods/a8c840d1-8047-4ad6-a990-3ab119ae1cc5/volumes/kubernetes.io~projected/kube-api-access-w97j5 major:0 minor:420 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/a8c840d1-8047-4ad6-a990-3ab119ae1cc5/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/a8c840d1-8047-4ad6-a990-3ab119ae1cc5/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:481 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~projected/kube-api-access-tnbf9:{mountpoint:/var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~projected/kube-api-access-tnbf9 major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6a9184d-0557-4e61-bf31-6dd69c0dfb15/volumes/kubernetes.io~projected/kube-api-access-djchk:{mountpoint:/var/lib/kubelet/pods/b6a9184d-0557-4e61-bf31-6dd69c0dfb15/volumes/kubernetes.io~projected/kube-api-access-djchk major:0 minor:634 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/volumes/kubernetes.io~projected/kube-api-access-2894g:{mountpoint:/var/lib/kubelet/pods/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/volumes/kubernetes.io~projected/kube-api-access-2894g major:0 minor:765 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:761 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/bc244427-5e4e-441c-a04d-f93aeca9b7c1/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/bc244427-5e4e-441c-a04d-f93aeca9b7c1/volumes/kubernetes.io~projected/kube-api-access major:0 minor:869 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:231 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/kube-api-access-c2dq8:{mountpoint:/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/kube-api-access-c2dq8 major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~secret/metrics-tls major:0 minor:527 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~projected/kube-api-access-qg7nx:{mountpoint:/var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~projected/kube-api-access-qg7nx major:0 minor:140 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~secret/webhook-cert major:0 minor:147 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/cf9f90f5-643f-41e8-a886-7d19fb064afc/volumes/kubernetes.io~projected/kube-api-access-pr995:{mountpoint:/var/lib/kubelet/pods/cf9f90f5-643f-41e8-a886-7d19fb064afc/volumes/kubernetes.io~projected/kube-api-access-pr995 major:0 minor:78 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d53c7e46-86e9-4328-9dfd-aec6deef5c01/volumes/kubernetes.io~projected/kube-api-access-wk9km:{mountpoint:/var/lib/kubelet/pods/d53c7e46-86e9-4328-9dfd-aec6deef5c01/volumes/kubernetes.io~projected/kube-api-access-wk9km major:0 minor:318 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc1c9136-80e1-4736-8959-cc1e58aee26e/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/dc1c9136-80e1-4736-8959-cc1e58aee26e/volumes/kubernetes.io~projected/kube-api-access major:0 minor:805 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/dc1c9136-80e1-4736-8959-cc1e58aee26e/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/dc1c9136-80e1-4736-8959-cc1e58aee26e/volumes/kubernetes.io~secret/serving-cert major:0 minor:804 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0763043-3813-43b6-9618-b2d15c942edb/volumes/kubernetes.io~projected/kube-api-access-mqhcp:{mountpoint:/var/lib/kubelet/pods/e0763043-3813-43b6-9618-b2d15c942edb/volumes/kubernetes.io~projected/kube-api-access-mqhcp major:0 minor:774 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0763043-3813-43b6-9618-b2d15c942edb/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/e0763043-3813-43b6-9618-b2d15c942edb/volumes/kubernetes.io~secret/cert major:0 minor:768 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e0763043-3813-43b6-9618-b2d15c942edb/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/e0763043-3813-43b6-9618-b2d15c942edb/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:770 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8d83309-58b2-40af-ab48-1f8b9aeffefb/volumes/kubernetes.io~projected/kube-api-access-4m68d:{mountpoint:/var/lib/kubelet/pods/e8d83309-58b2-40af-ab48-1f8b9aeffefb/volumes/kubernetes.io~projected/kube-api-access-4m68d major:0 minor:874 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/e8d83309-58b2-40af-ab48-1f8b9aeffefb/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/e8d83309-58b2-40af-ab48-1f8b9aeffefb/volumes/kubernetes.io~secret/proxy-tls major:0 minor:873 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/eda319d8-825a-4881-96a9-5386b87f8a4f/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/eda319d8-825a-4881-96a9-5386b87f8a4f/volumes/kubernetes.io~projected/ca-certs major:0 minor:422 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/eda319d8-825a-4881-96a9-5386b87f8a4f/volumes/kubernetes.io~projected/kube-api-access-6hpcb:{mountpoint:/var/lib/kubelet/pods/eda319d8-825a-4881-96a9-5386b87f8a4f/volumes/kubernetes.io~projected/kube-api-access-6hpcb major:0 minor:423 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~projected/kube-api-access-992bv:{mountpoint:/var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~projected/kube-api-access-992bv major:0 minor:245 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef1dbe95-a46f-4d09-87b0-f51429f2d82c/volumes/kubernetes.io~projected/kube-api-access-64hl9:{mountpoint:/var/lib/kubelet/pods/ef1dbe95-a46f-4d09-87b0-f51429f2d82c/volumes/kubernetes.io~projected/kube-api-access-64hl9 major:0 minor:321 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ef1dbe95-a46f-4d09-87b0-f51429f2d82c/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/ef1dbe95-a46f-4d09-87b0-f51429f2d82c/volumes/kubernetes.io~secret/serving-cert major:0 minor:320 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~projected/kube-api-access-bkjph:{mountpoint:/var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~projected/kube-api-access-bkjph major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f726d662-90e1-45b9-9bba-76a9c03faced/volumes/kubernetes.io~projected/kube-api-access-hflng:{mountpoint:/var/lib/kubelet/pods/f726d662-90e1-45b9-9bba-76a9c03faced/volumes/kubernetes.io~projected/kube-api-access-hflng major:0 minor:616 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f85ab8ab-f9f1-47ad-9c96-9498cef92474/volumes/kubernetes.io~projected/kube-api-access-sm25n:{mountpoint:/var/lib/kubelet/pods/f85ab8ab-f9f1-47ad-9c96-9498cef92474/volumes/kubernetes.io~projected/kube-api-access-sm25n major:0 minor:246 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f85ab8ab-f9f1-47ad-9c96-9498cef92474/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/f85ab8ab-f9f1-47ad-9c96-9498cef92474/volumes/kubernetes.io~secret/metrics-tls major:0 minor:535 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ffcc3a23-d81c-4064-a24a-857dbe3222c8/volumes/kubernetes.io~projected/kube-api-access-b9nhl:{mountpoint:/var/lib/kubelet/pods/ffcc3a23-d81c-4064-a24a-857dbe3222c8/volumes/kubernetes.io~projected/kube-api-access-b9nhl major:0 minor:99 fsType:tmpfs blockSize:0} overlay_0-102:{mountpoint:/var/lib/containers/storage/overlay/a28fa2cc999e6ea267ed23f28dba3465fc9522d5cf6be0687960b641050df30e/merged major:0 minor:102 fsType:overlay blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/29ba9a740bc3f512656a313f274e7e1f23e91e2bb2d95f6a187ab3b7abfbde21/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-108:{mountpoint:/var/lib/containers/storage/overlay/16d96c773637708d6b7cab625fe0da3dd37225d87ec882aa4cc1a487fd0590df/merged major:0 minor:108 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/141105be1617efc4816eaf77f13fe738d1b9271f7e00bdc80c8cf17efa3c33fb/merged major:0 minor:110 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/720484f392a715782d2708d46c97adaa08467ad64f53cc55e052b7a24e6c012b/merged major:0 minor:116 fsType:overlay 
blockSize:0} overlay_0-118:{mountpoint:/var/lib/containers/storage/overlay/f5765f56aa095154a0a2b5ddc40e1772b126a5c4ad2f385f1d3193859e7f2077/merged major:0 minor:118 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/a28c1a43a8547db9a0970c5d40a83a9590954264934b31d8f1eb937718b2995c/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-126:{mountpoint:/var/lib/containers/storage/overlay/bc43f43227f34389d2225dd542635315c87bd2060e3b14a0bf159a8238fa938c/merged major:0 minor:126 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/b1e093b896e85aca9f6371b7751b9dee79cf60bbe371c27d459138d316d7e810/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/4a4cd543c084de798a40be1c0757a8d0ca430d4e811311f3751f22a8041977ea/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-138:{mountpoint:/var/lib/containers/storage/overlay/34792a753031d2349d674e51920fe61bdd9cd73077ef9e2228a625524025e99a/merged major:0 minor:138 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/c62f25ef2a9658370707b91e96aae9af71a8f442a0fb71f1df5598b041b7aa31/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/d0b4f1fafc8d674b2e3696136e08786964ff7f982afdbfbbe567935922adc564/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-163:{mountpoint:/var/lib/containers/storage/overlay/f497d38621e247f7e7045088dafbd570aadb90ae6e1815478b3f379224f56b04/merged major:0 minor:163 fsType:overlay blockSize:0} overlay_0-168:{mountpoint:/var/lib/containers/storage/overlay/f8ac89c418e5cd7b53ee530847fc36ac6d27de85738be9c29b422e56d63f8595/merged major:0 minor:168 fsType:overlay blockSize:0} overlay_0-170:{mountpoint:/var/lib/containers/storage/overlay/7d0042807484a593a0aee58be2989d5316b59ed7228c6159f9f9dfd985612e59/merged major:0 minor:170 fsType:overlay blockSize:0} 
overlay_0-172:{mountpoint:/var/lib/containers/storage/overlay/45da25be612c4f2c20d4c58f76749be983448743ebd14e3d70de9eca9a39cbb3/merged major:0 minor:172 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/fc8978852098c9a607d9c5942035fe5ac1e042470c7b64b1771ad0a6282e98cf/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/d0be7809054f0b60db4b4ca8174a6a091335cc62801e3e2660a6fcdb39f0fcd5/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/bdacbd9f67c1ae841356cc5505cfe7cabb61f23ddc9afd8f2426edb56dc1e16d/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/77c2ac9f214366281fedd6b4c93acb5d53850f9988f0ba9879ddbaa2ae47fccb/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-199:{mountpoint:/var/lib/containers/storage/overlay/f0473b891cd17d54078edc9714413874ed4ec5a8c54e81f18ecf072655f27187/merged major:0 minor:199 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/bae226474312c4f074428cabf7ca44ae701721671afabf5eaa90f1d615ade77b/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-274:{mountpoint:/var/lib/containers/storage/overlay/9bceed6807491d97f03d49ee87c3e80e13b76232d7fb414cd4a4186875a89a23/merged major:0 minor:274 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/f5c5507be6da33615108db90c7fd8d7d1b74cc3bc3ea00ec4477eec67e94d4a3/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/cafc4c9bccd073d6b92f98229988f17782bb3289ba17e5872026c9200cff879a/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/b8a34d525fc6d0eeafb6ac11156dd5fd2dad3e276f992b01f2a7f1f81a42f0cf/merged major:0 minor:281 fsType:overlay blockSize:0} 
overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/364b2e2dbdd1e2e410f7f2ea963c2b6abfacf34c8591527f04940dc9adfecb2e/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/9681aba4a3d724eeec30885cb8754b7ca9e603ccbb924ae3d8f8b7be6ca85241/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/f70103bfc1ccf98bf8c9717afb805dc2ee414e099d480d15a2bb29756c4d1d9e/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/1232804f7100b4c67264884b68e33f2a5ae4f93dcc131e7dccc108f0697cde12/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-291:{mountpoint:/var/lib/containers/storage/overlay/80643ba69c27e2091aafe5dbec882a5c922dfe53fc07f4be35a343f8e7899ab8/merged major:0 minor:291 fsType:overlay blockSize:0} overlay_0-293:{mountpoint:/var/lib/containers/storage/overlay/9485f212a05b901c03857187eb7abaff4c2b0ccea34f3aba96e71d73474052c0/merged major:0 minor:293 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/b371a8539515b4358a080def996b2442173ccd541ff829f84c61fae539c7cf28/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/af237823629abe3f0f5ebb9f3cb2fb8ba99eefeac234c5a15f9b44b3c42c68e5/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/3c95633d82688147b6541405cfd3822778b05a952d6b67c4cd194b43406cf964/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-306:{mountpoint:/var/lib/containers/storage/overlay/4096a7a18ca70a759222eec5bc5a2e1ddb23ec8318dd0966deaaf0a135ebdc56/merged major:0 minor:306 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/544b00001914134cf9a9aed5eb4f967087070d9b31d4aa03935bf7f88d2c5312/merged major:0 minor:314 fsType:overlay blockSize:0} 
overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/2fb2751dc8dc4175c30c3f0009bda122f255283eabbef96e8c379b4a3517651e/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/3cfd04c4032788d5478814422adcb590af54516b0b7e47b5b1faf982aa9a6481/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/c7097518d19f29d1cd70a3f916d1c4133afbe34b000b2397c9cbe95ce1bd792b/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-323:{mountpoint:/var/lib/containers/storage/overlay/ce1dbfe7c89e24eb5bac0019c2a80f1a168d682b861f1b7206348a08e3323a14/merged major:0 minor:323 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/9b05cb67f5f3a2e1030cada560ae9d4fcf57aaa294c13ff4254c1a4435da902c/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-334:{mountpoint:/var/lib/containers/storage/overlay/3d398d9edfab98353549d63f2f65eedd9c88e5ba36ec71adb63662f3d9cb0c36/merged major:0 minor:334 fsType:overlay blockSize:0} overlay_0-336:{mountpoint:/var/lib/containers/storage/overlay/7b157e52ec652986cba48b23a2ea9ec92d2096a7e56b98bde83e6578fe5d1c2a/merged major:0 minor:336 fsType:overlay blockSize:0} overlay_0-341:{mountpoint:/var/lib/containers/storage/overlay/7f79b66c72aee79e0e9128a6d064633c10a9e16329dbba5b4d71592c84e7e03e/merged major:0 minor:341 fsType:overlay blockSize:0} overlay_0-342:{mountpoint:/var/lib/containers/storage/overlay/a4a5f277fae23e0702df233d2b6394b320b2c4ad96d620d1fcd91a1a349cf2db/merged major:0 minor:342 fsType:overlay blockSize:0} overlay_0-351:{mountpoint:/var/lib/containers/storage/overlay/00df28f1339e13dbb48b9387f19044bc94e5f84a1d9eb6a745de62eb0ccca0f7/merged major:0 minor:351 fsType:overlay blockSize:0} overlay_0-352:{mountpoint:/var/lib/containers/storage/overlay/048694bf1f2e8bc5d759cff474eb1391a768175bba5b8cd29be2ea983a365739/merged major:0 minor:352 fsType:overlay blockSize:0} 
overlay_0-353:{mountpoint:/var/lib/containers/storage/overlay/6db68beed3acab6a0bec24a6a35d8899ad85cd894c72ffb8924805411c5aafca/merged major:0 minor:353 fsType:overlay blockSize:0} overlay_0-356:{mountpoint:/var/lib/containers/storage/overlay/08f1e837991e1d95d376e225c45450b78f259a627952fe60cf9753db16b7ef23/merged major:0 minor:356 fsType:overlay blockSize:0} overlay_0-367:{mountpoint:/var/lib/containers/storage/overlay/81ed9a65e171c22f813cabdae5c4b8005f6bb3c7412e79ac5c65e872d71a8b08/merged major:0 minor:367 fsType:overlay blockSize:0} overlay_0-375:{mountpoint:/var/lib/containers/storage/overlay/df3d6a6b5953a50dbcc84c2a39245aff0173062eb4aad8e4474b515eb883f9eb/merged major:0 minor:375 fsType:overlay blockSize:0} overlay_0-376:{mountpoint:/var/lib/containers/storage/overlay/63c5d9ecac5ea72dff55e871d59b4e2273ada522f2385ac14b9bb56a776d89bf/merged major:0 minor:376 fsType:overlay blockSize:0} overlay_0-378:{mountpoint:/var/lib/containers/storage/overlay/dc8ab6e7f00cd6b6349a0e9e2ed6f26c71e9fab40e372411b1473c6144fd038d/merged major:0 minor:378 fsType:overlay blockSize:0} overlay_0-380:{mountpoint:/var/lib/containers/storage/overlay/ab1e9ec95499a2c0d8c82f9647e57e0080f3c4968a4eeda52f27a9dd8105e67d/merged major:0 minor:380 fsType:overlay blockSize:0} overlay_0-383:{mountpoint:/var/lib/containers/storage/overlay/119df7fbf871a6fb9216399bbf560729e55e799941abad91b3729088d78db08c/merged major:0 minor:383 fsType:overlay blockSize:0} overlay_0-385:{mountpoint:/var/lib/containers/storage/overlay/5205b33ee7a8890eb40eee5b5c230e44b13c487ac4b0628e04c01307ea3ced2e/merged major:0 minor:385 fsType:overlay blockSize:0} overlay_0-386:{mountpoint:/var/lib/containers/storage/overlay/19bde935ea7f873fdc4c566110a13b0787d895795b35a279b72f19b48b414787/merged major:0 minor:386 fsType:overlay blockSize:0} overlay_0-390:{mountpoint:/var/lib/containers/storage/overlay/e543b7fa17c5c2ae568341eb76685e40a9162a92b16c6c900506a411184b401c/merged major:0 minor:390 fsType:overlay blockSize:0} 
overlay_0-393:{mountpoint:/var/lib/containers/storage/overlay/c4f39a8f05fccc01908d07f08437d9639e22fe27656eebab5c6b15066f8a0c0f/merged major:0 minor:393 fsType:overlay blockSize:0} overlay_0-397:{mountpoint:/var/lib/containers/storage/overlay/40cf5ebceba65774ad97b0be8e4b443e9363749ba4380d1de3670c94e30a239c/merged major:0 minor:397 fsType:overlay blockSize:0} overlay_0-400:{mountpoint:/var/lib/containers/storage/overlay/ba760a5803325d2cbcba6cf4ea70ba490cf0dbf74c4b8eaf703388553886cc1e/merged major:0 minor:400 fsType:overlay blockSize:0} overlay_0-406:{mountpoint:/var/lib/containers/storage/overlay/8dd98e9865af92be4e9b0d4c02e3cb46d449ead043e14940f80964bea4e5046f/merged major:0 minor:406 fsType:overlay blockSize:0} overlay_0-408:{mountpoint:/var/lib/containers/storage/overlay/c8a655cc31b1b7ba6277f52aaefa6ae635b98837a94fe6f3610ccadc7f5b83ca/merged major:0 minor:408 fsType:overlay blockSize:0} overlay_0-414:{mountpoint:/var/lib/containers/storage/overlay/680e50ee9dcae330fa7f5fee1f0f28a02c3159776311c8d284680c103d0f7aa1/merged major:0 minor:414 fsType:overlay blockSize:0} overlay_0-419:{mountpoint:/var/lib/containers/storage/overlay/eb3fae8c701027b9101db11835892122b89bd92d1d2cc1172f6cfa7d51059f41/merged major:0 minor:419 fsType:overlay blockSize:0} overlay_0-435:{mountpoint:/var/lib/containers/storage/overlay/7e3572323993d37eca13319519beef27260ce0dc6208dc4baf292cb0e7ec8199/merged major:0 minor:435 fsType:overlay blockSize:0} overlay_0-436:{mountpoint:/var/lib/containers/storage/overlay/369ee9a2cd64246c2987fcc22b301a8ea2c8550becd093c2594ca8fae19ce432/merged major:0 minor:436 fsType:overlay blockSize:0} overlay_0-437:{mountpoint:/var/lib/containers/storage/overlay/4753cab31a76cafaade030e26f7abd567f8d82da8d56025b5c0dc0609c46506f/merged major:0 minor:437 fsType:overlay blockSize:0} overlay_0-439:{mountpoint:/var/lib/containers/storage/overlay/1f60b5eb3543db0983d47e8ad7234abde699edc46fae8c5ff009133ac93451b0/merged major:0 minor:439 fsType:overlay blockSize:0} 
overlay_0-440:{mountpoint:/var/lib/containers/storage/overlay/8450fd7b418224ae1168dba78a37103eee6bfd61106330f097ba765c1d534873/merged major:0 minor:440 fsType:overlay blockSize:0} overlay_0-444:{mountpoint:/var/lib/containers/storage/overlay/00a4994460196f34dca7f691f4a9d0670a9bf4652ad1ecaf5aa3786ac2b9d9d7/merged major:0 minor:444 fsType:overlay blockSize:0} overlay_0-45:{mountpoint:/var/lib/containers/storage/overlay/87f58434821697b05b741ead6c15660b37752d5725a717c1002ab057da14b1b6/merged major:0 minor:45 fsType:overlay blockSize:0} overlay_0-470:{mountpoint:/var/lib/containers/storage/overlay/fbfc0461fe4e19a8d8b18da1a5f3e754af42169a9dc19eb43648161851ec910d/merged major:0 minor:470 fsType:overlay blockSize:0} overlay_0-471:{mountpoint:/var/lib/containers/storage/overlay/1a8d375a06fc0349ff08fbcb27823737f4b7a56b2eebdfa13bf29e3e42d0ff41/merged major:0 minor:471 fsType:overlay blockSize:0} overlay_0-479:{mountpoint:/var/lib/containers/storage/overlay/8e131b1fc47045b4181c7769e98b2b35c4776497d9a066a1eab1d6527572f8f3/merged major:0 minor:479 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/51147c177173dc1bf1a4aff6041fc0b8aae6171e7ade6f399a8985ac69622364/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-484:{mountpoint:/var/lib/containers/storage/overlay/5656ab14298f80650b4f34bb52206068c3a5ca45b5f2a7e0a650fc4ee70fc9ac/merged major:0 minor:484 fsType:overlay blockSize:0} overlay_0-486:{mountpoint:/var/lib/containers/storage/overlay/888d52b77d268958eafa54e120318ae999ef2107cb75bd2c4c991db638fe0c82/merged major:0 minor:486 fsType:overlay blockSize:0} overlay_0-504:{mountpoint:/var/lib/containers/storage/overlay/c800f7f53febbbbd4c6b2bd0e3900d4249df25e0263cdff4c5066630b2aad03a/merged major:0 minor:504 fsType:overlay blockSize:0} overlay_0-507:{mountpoint:/var/lib/containers/storage/overlay/6384fc6519e6471e13e5fa23ac7f73c9066992c38856c1cd4e85ac8b6286978b/merged major:0 minor:507 fsType:overlay blockSize:0} 
overlay_0-508:{mountpoint:/var/lib/containers/storage/overlay/568a95c13e55a769cd6d0fbb6dc5918e0fdebc30ecaab71422bb8cbdc646655e/merged major:0 minor:508 fsType:overlay blockSize:0} overlay_0-513:{mountpoint:/var/lib/containers/storage/overlay/128a97c88f91b54c1e45914c39abf8c93ca257d4150978b61dc4ce198c9721e9/merged major:0 minor:513 fsType:overlay blockSize:0} overlay_0-515:{mountpoint:/var/lib/containers/storage/overlay/db4e7f0ccc3d73921fc962666c10d17bcae2f77e2b69d1e050749f733aefa4c6/merged major:0 minor:515 fsType:overlay blockSize:0} overlay_0-516:{mountpoint:/var/lib/containers/storage/overlay/217bc0b5a79e5e52911e4f5f17ef1f19ff2c1c633335a3241f92b109972f60b4/merged major:0 minor:516 fsType:overlay blockSize:0} overlay_0-519:{mountpoint:/var/lib/containers/storage/overlay/660a00eec2c8bae6c6dc57509a2e0d344fb7fd2550f2d274115356144da94ffc/merged major:0 minor:519 fsType:overlay blockSize:0} overlay_0-521:{mountpoint:/var/lib/containers/storage/overlay/6c81f1f24917ab3507ad65fa1bc09619b8fdf2b295c5f9e0ce8f95a1bbe5822c/merged major:0 minor:521 fsType:overlay blockSize:0} overlay_0-548:{mountpoint:/var/lib/containers/storage/overlay/ba155602fecb3faed01d7635faefb4abb8d301ed3590ea1903df8fcc66988d02/merged major:0 minor:548 fsType:overlay blockSize:0} overlay_0-550:{mountpoint:/var/lib/containers/storage/overlay/a8eb132a5f8457c20543f1761af9236b2e0ddcaf7aa4f28c3caee2a9b3fab435/merged major:0 minor:550 fsType:overlay blockSize:0} overlay_0-552:{mountpoint:/var/lib/containers/storage/overlay/be872c90143678123324c07464e0a48d3433437ce537578a445adde460c2af5c/merged major:0 minor:552 fsType:overlay blockSize:0} overlay_0-554:{mountpoint:/var/lib/containers/storage/overlay/52cac7253257e0cffe54750aa3ead2d71dbc36a2439b4fc2ef18d2335464aa12/merged major:0 minor:554 fsType:overlay blockSize:0} overlay_0-558:{mountpoint:/var/lib/containers/storage/overlay/71b6001ba5279f013ff5c71710f7839ea43312079600936ab8b1cbdc313773ca/merged major:0 minor:558 fsType:overlay blockSize:0} 
overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/90b131dcd256f10464f3cbe95bb1f8420eef5f195fb6828b2e4566a1c8c88055/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-566:{mountpoint:/var/lib/containers/storage/overlay/d433b797331139632548af3de7d09b4d0737c8017cbdad1ccb13ca7e096ef066/merged major:0 minor:566 fsType:overlay blockSize:0} overlay_0-582:{mountpoint:/var/lib/containers/storage/overlay/43c1c607fec64bc206c9438a6756b0bcbb063f427f061ed5be0a06c350759201/merged major:0 minor:582 fsType:overlay blockSize:0} overlay_0-586:{mountpoint:/var/lib/containers/storage/overlay/1e5311d44b89e5209800ee4dd19dff3619fbe7481377665f3c1bb4bc4c5d82db/merged major:0 minor:586 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/11ea667b8da90c10db4848e3c40ea98ea30f7afcc6db1f4cdf0a59b481f6b4ba/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-603:{mountpoint:/var/lib/containers/storage/overlay/742a9df2ce1fb24d224a8c6fc30d0b75866b53460de47ea10684d744ff143fee/merged major:0 minor:603 fsType:overlay blockSize:0} overlay_0-605:{mountpoint:/var/lib/containers/storage/overlay/a4cb98b9b611655ef3ce62d966d7eb8252e2b92184aa9b770aa83a861f2195f0/merged major:0 minor:605 fsType:overlay blockSize:0} overlay_0-614:{mountpoint:/var/lib/containers/storage/overlay/7628e2ae5bc7190aef221253a860b4198d721d25a53ae0f945970b9698352db2/merged major:0 minor:614 fsType:overlay blockSize:0} overlay_0-619:{mountpoint:/var/lib/containers/storage/overlay/33a155522c7c4e240165ef57a5a2b7b8b788fd2d445493eeb8fc9c097fc95740/merged major:0 minor:619 fsType:overlay blockSize:0} overlay_0-621:{mountpoint:/var/lib/containers/storage/overlay/4dcf5d84da62373f5f1d59fae02ed2cec412d9605c2c6037027ca3ba8d7cec06/merged major:0 minor:621 fsType:overlay blockSize:0} overlay_0-636:{mountpoint:/var/lib/containers/storage/overlay/f3990b456024719ccb4f471ee7347d095f09d27d0f16629779c80a62eedb5bd1/merged major:0 minor:636 fsType:overlay blockSize:0} 
overlay_0-639:{mountpoint:/var/lib/containers/storage/overlay/8cf8ffe4de0bd11b8f4e85648afc8b8d385adc4f167da24f1ec2d0b70f9b15a1/merged major:0 minor:639 fsType:overlay blockSize:0} overlay_0-642:{mountpoint:/var/lib/containers/storage/overlay/ae8b2de157504b45e28a771cba71f785e902bf146b6373e1bca1a332ac60a52d/merged major:0 minor:642 fsType:overlay blockSize:0} overlay_0-65:{mountpoint:/var/lib/containers/storage/overlay/f023c5a406ad46df470a4c197aeca332aeb5f9e1d61dc53c5306a4855329cd58/merged major:0 minor:65 fsType:overlay blockSize:0} overlay_0-657:{mountpoint:/var/lib/containers/storage/overlay/654bce13ad77f03243e46ec3c1284dd7827ced5aa14c0bf9aeb20a4c8d2f39b5/merged major:0 minor:657 fsType:overlay blockSize:0} overlay_0-666:{mountpoint:/var/lib/containers/storage/overlay/0b545e455d41a0b0a50fe61eea8ae52c670811a34b86df15c07b96b353d8ffd5/merged major:0 minor:666 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/748b0f8a890a2780500eaffc2ce3012acc49b7983611c82d6d37a8254bfdab95/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-671:{mountpoint:/var/lib/containers/storage/overlay/6cb439bd20616845ffd0be115da1f4bc6401f759b5c3b85e73838944214f8a41/merged major:0 minor:671 fsType:overlay blockSize:0} overlay_0-68:{mountpoint:/var/lib/containers/storage/overlay/a36f6ee7edf3771fd4d7eaf2e0ef78d9ad4aa9e94d9d13afbb6a295b92176034/merged major:0 minor:68 fsType:overlay blockSize:0} overlay_0-680:{mountpoint:/var/lib/containers/storage/overlay/013cbeac5cb3f7bd897b9cc2df1c2748751f47bfca3b5d00c52d5c6881e09d28/merged major:0 minor:680 fsType:overlay blockSize:0} overlay_0-684:{mountpoint:/var/lib/containers/storage/overlay/b7182fc323b6a61a85cfb05d3c3001942b88bcc53f2a66d8ca8d9d1771c8d42d/merged major:0 minor:684 fsType:overlay blockSize:0} overlay_0-686:{mountpoint:/var/lib/containers/storage/overlay/20dcd2a22e95f3840aef9e983949f25fa2c3fc95edf4404c41d02aa33b6f7917/merged major:0 minor:686 fsType:overlay blockSize:0} 
overlay_0-696:{mountpoint:/var/lib/containers/storage/overlay/b9e999fbfcbdc8b286cfb24cc20ac8c8e613cba2c5571ba76aa14db3c0b6c4a5/merged major:0 minor:696 fsType:overlay blockSize:0} overlay_0-706:{mountpoint:/var/lib/containers/storage/overlay/300796f8c6087800115286bdab9decc1e2eaf6a0b63ac75da00a041d7db37712/merged major:0 minor:706 fsType:overlay blockSize:0} overlay_0-71:{mountpoint:/var/lib/containers/storage/overlay/b8cef5b8ba725fadfc80f526a3524e55ff7e5bb2c5d78d7e96b1179b1e7c22ff/merged major:0 minor:71 fsType:overlay blockSize:0} overlay_0-716:{mountpoint:/var/lib/containers/storage/overlay/2ba812c31566a75939fa651f74752463ee6c3ea8fc99dabfce60a90f2f1f74e4/merged major:0 minor:716 fsType:overlay blockSize:0} overlay_0-718:{mountpoint:/var/lib/containers/storage/overlay/5e3ae0348d4e0e473f0f56c62b5740258ed6c7da49e33e906c4228f58789a2ff/merged major:0 minor:718 fsType:overlay blockSize:0} overlay_0-72:{mountpoint:/var/lib/containers/storage/overlay/ee8260563f5c63cdf49cfa37e67c455c7613eafdd17519de6ddc6ed8da430ab5/merged major:0 minor:72 fsType:overlay blockSize:0} overlay_0-720:{mountpoint:/var/lib/containers/storage/overlay/16865d0a3b4f290accb22fe1cc171eddff512563df7b9e8ff5f15bc3651b62c5/merged major:0 minor:720 fsType:overlay blockSize:0} overlay_0-736:{mountpoint:/var/lib/containers/storage/overlay/030895e81a5b5577f9168fd9b39be3e0a92d3c4ab08c231847261a7474e8e7a9/merged major:0 minor:736 fsType:overlay blockSize:0} overlay_0-748:{mountpoint:/var/lib/containers/storage/overlay/853fe38ec2b86b95a4f81617fef54120c2dccd5db4ef88880437e81ec80daf05/merged major:0 minor:748 fsType:overlay blockSize:0} overlay_0-76:{mountpoint:/var/lib/containers/storage/overlay/f2748e45dbe32c6104bcc09832ad911bf6669ac92872da31f7465ddacdce10c9/merged major:0 minor:76 fsType:overlay blockSize:0} overlay_0-777:{mountpoint:/var/lib/containers/storage/overlay/b14e2bc738e831007a57eb844fa7926ba9a2c3e9f63c77553232ec8aac4a13f2/merged major:0 minor:777 fsType:overlay blockSize:0} 
overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/15deca500e8e9968be3d8afb86bc25c4fe626387d2fc3c753d94d2e444df136f/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-793:{mountpoint:/var/lib/containers/storage/overlay/040035a9c27c1c218daa334b0400904eaf313e9a3f970352e3739f2fe60402fd/merged major:0 minor:793 fsType:overlay blockSize:0} overlay_0-799:{mountpoint:/var/lib/containers/storage/overlay/a1462b7c96cfef3a6f738b3ca84d2126d8f399e7e662f21ccf6b8400a0168e6d/merged major:0 minor:799 fsType:overlay blockSize:0} overlay_0-802:{mountpoint:/var/lib/containers/storage/overlay/aaaffbe760f8b4c9f96a0c8e394ddfff608ff55793d288cd2de419e8b4171759/merged major:0 minor:802 fsType:overlay blockSize:0} overlay_0-809:{mountpoint:/var/lib/containers/storage/overlay/83a3b85e52157addeee44583bef86b6565d0bb1d3a07c6c62da9d6fdecda9deb/merged major:0 minor:809 fsType:overlay blockSize:0} overlay_0-814:{mountpoint:/var/lib/containers/storage/overlay/dd9e2cff7aacce837243ee27afbfec807b19aa8cc0f7948a2a8ff91da1eb975b/merged major:0 minor:814 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/717b4f43894454e4450aaad42dcdf971e9ff6122ed92447e037f998f7ea4b2ae/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-820:{mountpoint:/var/lib/containers/storage/overlay/215134034fab2ebab167738d7bc3c822f942d363bfe5a3288831ae6e74c02165/merged major:0 minor:820 fsType:overlay blockSize:0} overlay_0-822:{mountpoint:/var/lib/containers/storage/overlay/d06629934bda5af07a6964bf201a7f23c9ef07c3ed53392ee5262d451831e93c/merged major:0 minor:822 fsType:overlay blockSize:0} overlay_0-831:{mountpoint:/var/lib/containers/storage/overlay/6a98d5164260f481645b74822a5d2489634f8c52db77c1971a882e718a47dad3/merged major:0 minor:831 fsType:overlay blockSize:0} overlay_0-833:{mountpoint:/var/lib/containers/storage/overlay/e033193400cb915a11524c7aa91e59ac8dd47c7b3d106d7de5326632bb4aab82/merged major:0 minor:833 fsType:overlay blockSize:0} 
overlay_0-835:{mountpoint:/var/lib/containers/storage/overlay/159a19833943295f9346cf4c6c3974eeb31fcae33386120429a989bcd1491524/merged major:0 minor:835 fsType:overlay blockSize:0} overlay_0-841:{mountpoint:/var/lib/containers/storage/overlay/aa58d343d16c51ebce00188ecc08b94eb412d9d2cb90a9c8370c24e290d2f579/merged major:0 minor:841 fsType:overlay blockSize:0} overlay_0-843:{mountpoint:/var/lib/containers/storage/overlay/32f90e5d4d47b99269770bf9ff80e7ff95a51b77f426382bc413f21c272ec253/merged major:0 minor:843 fsType:overlay blockSize:0} overlay_0-845:{mountpoint:/var/lib/containers/storage/overlay/f515a337d5463a59a2ed0d5f57f22d9477e899901ddb364d46eed4c86e62709c/merged major:0 minor:845 fsType:overlay blockSize:0} overlay_0-847:{mountpoint:/var/lib/containers/storage/overlay/edefd698ab97e50dcc898da5a9426154af3fb18ef3c2e6d8c6f8e5a91912f636/merged major:0 minor:847 fsType:overlay blockSize:0} overlay_0-849:{mountpoint:/var/lib/containers/storage/overlay/533d25dfbbc77de1e5511203ba09f4816ae60020747f9d14f27dcd5251a3dd66/merged major:0 minor:849 fsType:overlay blockSize:0} overlay_0-852:{mountpoint:/var/lib/containers/storage/overlay/b86aac4c349a495a8e534d3e0097d08408fd238d6a856956c8ca0210079109dd/merged major:0 minor:852 fsType:overlay blockSize:0} overlay_0-854:{mountpoint:/var/lib/containers/storage/overlay/c5ff5ac895f588ba1e017880a2679c05c18ddb2daffdea855a056e79c73bc69f/merged major:0 minor:854 fsType:overlay blockSize:0} overlay_0-860:{mountpoint:/var/lib/containers/storage/overlay/8421962b8b4762119243428b08db618dd99aeb9e1c784c200fdeea4d350b5373/merged major:0 minor:860 fsType:overlay blockSize:0} overlay_0-879:{mountpoint:/var/lib/containers/storage/overlay/5c688310e69146142a109c23b75527d75ab674be61a0444a37c34c84fcb42f89/merged major:0 minor:879 fsType:overlay blockSize:0} overlay_0-881:{mountpoint:/var/lib/containers/storage/overlay/c67a780645f6cbd583c4ab392c15a2ca251d5386d4ca6fdc06cb4f35cdf95aab/merged major:0 minor:881 fsType:overlay blockSize:0} 
overlay_0-883:{mountpoint:/var/lib/containers/storage/overlay/9490293d5efacf615d1a444e5dda861644a54345fcb8dc51c6720c4230bb0071/merged major:0 minor:883 fsType:overlay blockSize:0} overlay_0-885:{mountpoint:/var/lib/containers/storage/overlay/fd24a8506657d997f391ccdd3c0e05807ef31f5f19122207230fc2e886d316a4/merged major:0 minor:885 fsType:overlay blockSize:0} overlay_0-889:{mountpoint:/var/lib/containers/storage/overlay/39c52b4a22dea8562aa9d608f95c6c58fc191c8d2fad7b8af6b67e59987eebaa/merged major:0 minor:889 fsType:overlay blockSize:0} overlay_0-900:{mountpoint:/var/lib/containers/storage/overlay/b824967d7964d7712d52a2bdc77d32de57e1dc248b73bbe633d2b5f593114be1/merged major:0 minor:900 fsType:overlay blockSize:0} overlay_0-904:{mountpoint:/var/lib/containers/storage/overlay/f40f1a6462bac458d9d74cd8d368efba6786342e707834fed2783a9c8f26f564/merged major:0 minor:904 fsType:overlay blockSize:0} overlay_0-929:{mountpoint:/var/lib/containers/storage/overlay/83073b67c827b101dfb453e8e356dad6bba47ee5d450adc852705a3cd0361bce/merged major:0 minor:929 fsType:overlay blockSize:0} overlay_0-93:{mountpoint:/var/lib/containers/storage/overlay/96e04fe3f273bb240ac7d3dd1550933bbbbc340066b987e6d8f6535c5e1c9156/merged major:0 minor:93 fsType:overlay blockSize:0} overlay_0-933:{mountpoint:/var/lib/containers/storage/overlay/f3147658c6063dd432788c946296cee20f9b4f2d650fbb9aa45f99cc6e74d6de/merged major:0 minor:933 fsType:overlay blockSize:0} overlay_0-945:{mountpoint:/var/lib/containers/storage/overlay/9d5e9d2f87ac5a50b95fd2af81b0e8aeb86cacc7dc6073b925f912c857439829/merged major:0 minor:945 fsType:overlay blockSize:0} overlay_0-95:{mountpoint:/var/lib/containers/storage/overlay/0b9009e3194df7a4594005ba3b7a75280b53cd571a05af0dfad34dee253b107a/merged major:0 minor:95 fsType:overlay blockSize:0} overlay_0-956:{mountpoint:/var/lib/containers/storage/overlay/2f82804f622c06bd9c021a5ac029752c24df86a99c02b9fcca5b09e918a428fa/merged major:0 minor:956 fsType:overlay blockSize:0} 
overlay_0-958:{mountpoint:/var/lib/containers/storage/overlay/5d45205e9f0dac33390ccb5d931ba6ead0bb3732842706cf8b4c9695ec0a14c5/merged major:0 minor:958 fsType:overlay blockSize:0} overlay_0-960:{mountpoint:/var/lib/containers/storage/overlay/2f043cd901b3100c4fbddb170e60a2591bf47425dd98a7acf919db8c334b4234/merged major:0 minor:960 fsType:overlay blockSize:0}] Mar 13 12:49:33.611456 master-0 kubenswrapper[19715]: I0313 12:49:33.610028 19715 manager.go:217] Machine: {Timestamp:2026-03-13 12:49:33.609114453 +0000 UTC m=+0.175787230 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:fe2021b5fe9941cbb2f9ca5654d6ac6f SystemUUID:fe2021b5-fe99-41cb-b2f9-ca5654d6ac6f BootID:1315907d-16f0-44fe-950e-68be880afcd6 Filesystems:[{Device:/var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:468 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:633 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-380 DeviceMajor:0 DeviceMinor:380 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-375 DeviceMajor:0 DeviceMinor:375 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-843 DeviceMajor:0 DeviceMinor:843 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-397 DeviceMajor:0 
DeviceMinor:397 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/af3b95a05d0ae3229790032e0ff83bd0ae5924b5a61d802b485f5d4cc67a961c/userdata/shm DeviceMajor:0 DeviceMinor:401 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c376cfcc149f814093143297d444233d029091219b6838c537c7a5d68a679b01/userdata/shm DeviceMajor:0 DeviceMinor:346 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:228 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/eda319d8-825a-4881-96a9-5386b87f8a4f/volumes/kubernetes.io~projected/kube-api-access-6hpcb DeviceMajor:0 DeviceMinor:423 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-614 DeviceMajor:0 DeviceMinor:614 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-558 DeviceMajor:0 DeviceMinor:558 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a8c0e7677e600788801fd2620471398efea77f43fbc90f3feb8d2a58a5b40162/userdata/shm DeviceMajor:0 DeviceMinor:174 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/dc1c9136-80e1-4736-8959-cc1e58aee26e/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:805 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-168 DeviceMajor:0 DeviceMinor:168 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-889 DeviceMajor:0 DeviceMinor:889 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-820 DeviceMajor:0 DeviceMinor:820 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-860 DeviceMajor:0 DeviceMinor:860 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-639 DeviceMajor:0 DeviceMinor:639 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/72b959a542e46e3641183520e8e6d5e56a7222530509233edfd3479ba9158651/userdata/shm DeviceMajor:0 DeviceMinor:954 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:225 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c2463d59212cd944dab4ea9d30f2cc50f1b57872c877b533a967a0558f9e8739/userdata/shm DeviceMajor:0 DeviceMinor:77 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-385 DeviceMajor:0 DeviceMinor:385 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31442e1e-3f42-4dba-82d5-08e5f8d29a58/volumes/kubernetes.io~projected/kube-api-access-lm4d2 DeviceMajor:0 DeviceMinor:758 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/824d5e18774211ffd65269e6c76a79cffc7294bc9b558c91abfddb9b02e76444/userdata/shm DeviceMajor:0 DeviceMinor:806 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/70c8b79e-4d29-4ae2-a24f-68595d942442/volumes/kubernetes.io~projected/kube-api-access-bk8kt DeviceMajor:0 DeviceMinor:303 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-552 DeviceMajor:0 DeviceMinor:552 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b833b4c44fc7671aadf2bbf7695850b67cef941ee23693e9e8acaa00999b3a13/userdata/shm DeviceMajor:0 DeviceMinor:655 Capacity:67108864 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:overlay_0-108 DeviceMajor:0 DeviceMinor:108 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-199 DeviceMajor:0 DeviceMinor:199 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~projected/kube-api-access-qg7nx DeviceMajor:0 DeviceMinor:140 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/30c48665c9970605b1c6eec8cc08b81474d790e408c1dda1af4341df6b8abab1/userdata/shm DeviceMajor:0 DeviceMinor:249 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-291 DeviceMajor:0 DeviceMinor:291 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c5484d1b3c48429e30590a8c004d9563ea8ff1590e9912835b4e1fb40bb82de5/userdata/shm DeviceMajor:0 DeviceMinor:477 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-353 DeviceMajor:0 DeviceMinor:353 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~projected/kube-api-access-5jknp DeviceMajor:0 DeviceMinor:91 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-885 DeviceMajor:0 DeviceMinor:885 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ffcc3a23-d81c-4064-a24a-857dbe3222c8/volumes/kubernetes.io~projected/kube-api-access-b9nhl DeviceMajor:0 DeviceMinor:99 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-336 DeviceMajor:0 DeviceMinor:336 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-306 DeviceMajor:0 DeviceMinor:306 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-605 DeviceMajor:0 DeviceMinor:605 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bc244427-5e4e-441c-a04d-f93aeca9b7c1/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:869 Capacity:200003584 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e8d83309-58b2-40af-ab48-1f8b9aeffefb/volumes/kubernetes.io~projected/kube-api-access-4m68d DeviceMajor:0 DeviceMinor:874 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-929 DeviceMajor:0 DeviceMinor:929 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-71 DeviceMajor:0 DeviceMinor:71 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a52c7346de93add1d237d99f0d1a7027e99e77d0afd84eceb9bcc49809bf923e/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~projected/kube-api-access-x2jkn DeviceMajor:0 DeviceMinor:235 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d53c7e46-86e9-4328-9dfd-aec6deef5c01/volumes/kubernetes.io~projected/kube-api-access-wk9km DeviceMajor:0 DeviceMinor:318 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:461 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-696 DeviceMajor:0 DeviceMinor:696 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/a31388bf3eb4be6295c3f302e94eade7f88980688dad331a6fb5026c223c9070/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~projected/kube-api-access-bkjph DeviceMajor:0 DeviceMinor:233 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/adad8c1ef5c4b589ed8b1cb34f6484ca79dbaffdd4f786714ba25a8f28ac7eaf/userdata/shm DeviceMajor:0 DeviceMinor:797 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/72af96f2c7705b273fca5fc5d267412d3d3c7c9e170609cf42269c51f6355917/userdata/shm DeviceMajor:0 DeviceMinor:734 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-945 DeviceMajor:0 DeviceMinor:945 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-479 DeviceMajor:0 DeviceMinor:479 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-831 DeviceMajor:0 DeviceMinor:831 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-516 DeviceMajor:0 DeviceMinor:516 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-334 DeviceMajor:0 DeviceMinor:334 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:overlay_0-118 DeviceMajor:0 DeviceMinor:118 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/eda319d8-825a-4881-96a9-5386b87f8a4f/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:422 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f85ab8ab-f9f1-47ad-9c96-9498cef92474/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:535 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-716 DeviceMajor:0 DeviceMinor:716 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-845 DeviceMajor:0 DeviceMinor:845 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-852 DeviceMajor:0 DeviceMinor:852 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-444 DeviceMajor:0 DeviceMinor:444 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-102 DeviceMajor:0 DeviceMinor:102 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-138 DeviceMajor:0 DeviceMinor:138 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f85ab8ab-f9f1-47ad-9c96-9498cef92474/volumes/kubernetes.io~projected/kube-api-access-sm25n DeviceMajor:0 DeviceMinor:246 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-550 DeviceMajor:0 DeviceMinor:550 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5623ea13-a34b-4510-8902-341912d115df/volumes/kubernetes.io~projected/kube-api-access-q9tpt DeviceMajor:0 DeviceMinor:743 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-933 DeviceMajor:0 DeviceMinor:933 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8226ffac-1f76-4eaa-ada5-056b5fd031b4/volumes/kubernetes.io~projected/kube-api-access-gkcxc DeviceMajor:0 DeviceMinor:243 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ef1dbe95-a46f-4d09-87b0-f51429f2d82c/volumes/kubernetes.io~projected/kube-api-access-64hl9 DeviceMajor:0 DeviceMinor:321 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:761 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:775 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:234 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-736 DeviceMajor:0 DeviceMinor:736 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6970059f480dc091ae05c0c7c9205d04df86a1f3452392a79024b011c7f566dc/userdata/shm DeviceMajor:0 DeviceMinor:130 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-172 DeviceMajor:0 DeviceMinor:172 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-72 DeviceMajor:0 DeviceMinor:72 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-680 DeviceMajor:0 DeviceMinor:680 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-686 DeviceMajor:0 DeviceMinor:686 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e0763043-3813-43b6-9618-b2d15c942edb/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:770 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-471 DeviceMajor:0 DeviceMinor:471 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-393 DeviceMajor:0 DeviceMinor:393 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-586 DeviceMajor:0 DeviceMinor:586 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-484 DeviceMajor:0 DeviceMinor:484 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/310bb063b58a9159851ef88dd90cde60bf53039832d7c07feba8d470bdfa8768/userdata/shm DeviceMajor:0 DeviceMinor:310 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:221 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/kube-api-access-c2dq8 DeviceMajor:0 DeviceMinor:226 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e0763043-3813-43b6-9618-b2d15c942edb/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:768 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6d1a0616-4479-4621-b042-36a586bd8248/volumes/kubernetes.io~projected/kube-api-access-jn59j DeviceMajor:0 DeviceMinor:115 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b3ecbff0b1ffe2eac307dbf08badd582929ec9ff7e80f96a8ca7754f559637ea/userdata/shm DeviceMajor:0 DeviceMinor:256 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/01bcdd1bedab010174152427c2fc9fc5240d2b52c3bee410c42e480d89d6c0f8/userdata/shm DeviceMajor:0 DeviceMinor:787 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-93 DeviceMajor:0 DeviceMinor:93 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2a5976df-0366-47b3-bc54-1ba7c249e87c/volumes/kubernetes.io~projected/kube-api-access-27pbr DeviceMajor:0 DeviceMinor:229 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-383 DeviceMajor:0 DeviceMinor:383 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-657 DeviceMajor:0 DeviceMinor:657 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-470 DeviceMajor:0 DeviceMinor:470 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-809 DeviceMajor:0 DeviceMinor:809 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ccdf24fb12f7d902aeac298cfdb10afdab60e06015a73c1ef84d90c38418232b/userdata/shm DeviceMajor:0 DeviceMinor:473 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-883 DeviceMajor:0 DeviceMinor:883 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:949 Capacity:200003584 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~projected/kube-api-access-pqm5h DeviceMajor:0 DeviceMinor:232 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:426 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:427 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:472 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:755 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-900 DeviceMajor:0 DeviceMinor:900 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-521 DeviceMajor:0 DeviceMinor:521 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/684c9067-189a-4f50-ac8d-97111aa73d9c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-386 DeviceMajor:0 DeviceMinor:386 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4f942fce-07a9-4377-8330-c6249a5a8b24/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:50 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/14eb83e7-c436-4f10-8cba-29e09a7036a8/volumes/kubernetes.io~projected/kube-api-access-kvn5d DeviceMajor:0 DeviceMinor:778 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-835 DeviceMajor:0 DeviceMinor:835 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-408 DeviceMajor:0 DeviceMinor:408 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fea61f96ae5a58f1058d560f7a03de973bc0402e1a0675f1764951c0f4d6890e/userdata/shm DeviceMajor:0 DeviceMinor:265 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/18a8f8a3e194d3ca33fa06c6cb0a35721b606154a0b49ff431c90e0a47be8a6c/userdata/shm DeviceMajor:0 DeviceMinor:270 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:534 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/58581675-62f2-4564-9e12-bf34551b96ac/volumes/kubernetes.io~projected/kube-api-access-64w7v DeviceMajor:0 DeviceMinor:602 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-802 DeviceMajor:0 DeviceMinor:802 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-342 DeviceMajor:0 DeviceMinor:342 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-352 DeviceMajor:0 DeviceMinor:352 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c/volumes/kubernetes.io~projected/kube-api-access-cscxl DeviceMajor:0 DeviceMinor:759 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/91b49f3d1bef1ff2ffc876781ea51843f67335017ffa1e90ffc9330a2dc71785/userdata/shm DeviceMajor:0 DeviceMinor:165 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4a1a1ca2f1f627a9edd53099939af120013911bcf17806e1f6a21cd1517caec4/userdata/shm 
DeviceMajor:0 DeviceMinor:345 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/99e9d3fc7152ff7bfdbd97007d95913bd72cfac57cdb379fde935a1b0b89854a/userdata/shm DeviceMajor:0 DeviceMinor:795 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1addab03e0a43377bc42e7aa1ca7b3740d5d3b320dad8b09d9eff4da120413e0/userdata/shm DeviceMajor:0 DeviceMinor:546 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-566 DeviceMajor:0 DeviceMinor:566 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~projected/kube-api-access-rspzx DeviceMajor:0 DeviceMinor:230 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b6a9184d-0557-4e61-bf31-6dd69c0dfb15/volumes/kubernetes.io~projected/kube-api-access-djchk DeviceMajor:0 DeviceMinor:634 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:231 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b8d42a515c20f0a163956eb8cf93dea5da1bfe49ebc70be65a7367110ca9d5ce/userdata/shm DeviceMajor:0 DeviceMinor:308 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4801a1906a7001eae337b963c9facf81446c4cb5eb428077e46f31714758e82d/userdata/shm DeviceMajor:0 DeviceMinor:536 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2a5976df-0366-47b3-bc54-1ba7c249e87c/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:51 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-684 DeviceMajor:0 DeviceMinor:684 Capacity:214143315968 Type:vfs Inodes:104594880 
HasInodes:true} {Device:/var/lib/kubelet/pods/90c6474d-44a1-4164-a85b-6de0525dc656/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:533 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c50b66c08b64d0837766db36e00d9e48a3e7f90a13ec9264ea03f094b56406e2/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/e0763043-3813-43b6-9618-b2d15c942edb/volumes/kubernetes.io~projected/kube-api-access-mqhcp DeviceMajor:0 DeviceMinor:774 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b99b6a6f8847624f7d1b248d004e4f915acf70fd8eb923011f7483aa95bb9e70/userdata/shm DeviceMajor:0 DeviceMinor:790 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/dfda9ac962c72952dd338c0552968ea41c65cec9deb2da109d44fd46401c07be/userdata/shm DeviceMajor:0 DeviceMinor:943 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/59c9773d-7e88-4e30-9b8a-792a869a860e/volumes/kubernetes.io~projected/kube-api-access-vp6bn DeviceMajor:0 DeviceMinor:123 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-621 DeviceMajor:0 DeviceMinor:621 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-513 DeviceMajor:0 DeviceMinor:513 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~projected/kube-api-access-b2lvh DeviceMajor:0 DeviceMinor:153 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-65 DeviceMajor:0 DeviceMinor:65 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-849 DeviceMajor:0 DeviceMinor:849 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-378 DeviceMajor:0 DeviceMinor:378 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/66a62527c0e5db66e9872c3dd7560bdbc6ef268bc8ac034206fe2aa11b418af3/userdata/shm DeviceMajor:0 DeviceMinor:617 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/20217cff-2f81-4a56-9c15-28385c19258c/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:52 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1b2dea30812459a0f2e3cad7fc9f7d04a23de47d9995bf80f1829df8b09480d6/userdata/shm DeviceMajor:0 DeviceMinor:482 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/58581675-62f2-4564-9e12-bf34551b96ac/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:565 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-666 DeviceMajor:0 DeviceMinor:666 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-440 DeviceMajor:0 DeviceMinor:440 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/54c7efc1-6d89-4831-89d6-6f2812c36c36/volumes/kubernetes.io~projected/kube-api-access-qttkt DeviceMajor:0 DeviceMinor:242 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ef7730594563babb92c30139e5b185c02149726a1290cf94d92c26f164aa3181/userdata/shm DeviceMajor:0 DeviceMinor:261 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/59c9773d-7e88-4e30-9b8a-792a869a860e/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:64 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-163 DeviceMajor:0 DeviceMinor:163 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-748 DeviceMajor:0 DeviceMinor:748 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75/volumes/kubernetes.io~projected/kube-api-access-cv745 DeviceMajor:0 DeviceMinor:776 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-958 DeviceMajor:0 DeviceMinor:958 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a8c840d1-8047-4ad6-a990-3ab119ae1cc5/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:481 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5868e4aaa495ba2002dc9f38876278ea8eced1d322d3455b76a22ad5843a0e53/userdata/shm DeviceMajor:0 DeviceMinor:304 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1d9516f705e1b8698eb1f3dec329a0f76ba7bb5d655d5175432f90e826464bf9/userdata/shm DeviceMajor:0 DeviceMinor:268 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a7ea7f8a7c14a4770bc974d998f5bd5daace368d7b2428f8320ae10321a074ac/userdata/shm DeviceMajor:0 DeviceMinor:539 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cf67d16ae41f2d06685c25d23bb40014bd3ceb93a00f8755a0e1d4d5c6c424a3/userdata/shm DeviceMajor:0 DeviceMinor:789 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-881 DeviceMajor:0 DeviceMinor:881 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226/userdata/shm DeviceMajor:0 DeviceMinor:44 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-45 DeviceMajor:0 DeviceMinor:45 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa/volumes/kubernetes.io~projected/kube-api-access-9n8sb DeviceMajor:0 DeviceMinor:241 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ae713e76b592ab486e74396025cc6216796b64de06bdba6168c650a39735be09/userdata/shm DeviceMajor:0 DeviceMinor:248 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-720 DeviceMajor:0 DeviceMinor:720 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a8c840d1-8047-4ad6-a990-3ab119ae1cc5/volumes/kubernetes.io~projected/kube-api-access-w97j5 DeviceMajor:0 DeviceMinor:420 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-508 DeviceMajor:0 DeviceMinor:508 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/4f942fce-07a9-4377-8330-c6249a5a8b24/volumes/kubernetes.io~projected/kube-api-access-7s2cb DeviceMajor:0 DeviceMinor:224 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-414 DeviceMajor:0 DeviceMinor:414 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:531 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-68 DeviceMajor:0 DeviceMinor:68 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:754 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/142a3bdc9b5ff21edbbdecd123b72a85c46a9bbdc67183506baedeab4865493d/userdata/shm DeviceMajor:0 DeviceMinor:267 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/775453f0311a20f5a59ce1be5cefed7836882d9f13ee9dc3248617ae5895d787/userdata/shm DeviceMajor:0 DeviceMinor:272 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-435 DeviceMajor:0 DeviceMinor:435 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-548 DeviceMajor:0 DeviceMinor:548 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:607 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cf580693-2931-4fef-adb5-b396f7303352/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:147 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b2ad4825-17fa-4ddd-b21e-334158f1c048/volumes/kubernetes.io~projected/kube-api-access-tnbf9 DeviceMajor:0 DeviceMinor:244 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-642 DeviceMajor:0 DeviceMinor:642 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1/volumes/kubernetes.io~projected/kube-api-access-dg5p4 DeviceMajor:0 DeviceMinor:757 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6592aa5b-4a50-40f6-80a5-87e3c547018d/volumes/kubernetes.io~projected/kube-api-access-s7cgb DeviceMajor:0 DeviceMinor:760 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:129 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/03758d96-5a20-4cba-92e0-47f5b1a3e558/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:772 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-799 DeviceMajor:0 DeviceMinor:799 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-833 DeviceMajor:0 DeviceMinor:833 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-519 DeviceMajor:0 DeviceMinor:519 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-406 DeviceMajor:0 DeviceMinor:406 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 
DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-367 DeviceMajor:0 DeviceMinor:367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2f485ea5123a1d0182412387178e57b07dfd142ef3af3f80ba71084ac36459bd/userdata/shm DeviceMajor:0 DeviceMinor:409 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1ad68c2d-762a-47ed-bd56-e823a83b9087/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-293 DeviceMajor:0 DeviceMinor:293 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-323 DeviceMajor:0 DeviceMinor:323 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-356 DeviceMajor:0 DeviceMinor:356 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-437 DeviceMajor:0 DeviceMinor:437 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-619 DeviceMajor:0 DeviceMinor:619 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/80ceb0f9-67e4-4275-8532-85b6602367a2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:730 Capacity:200003584 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-960 DeviceMajor:0 DeviceMinor:960 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/71b741d4-3899-4d31-afd1-72f5a9321f75/volumes/kubernetes.io~projected/kube-api-access-2h5ht DeviceMajor:0 DeviceMinor:240 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/58be068b21a4eb91682595cd919b568f64a42b5eea6271ec682461e07a92c3ae/userdata/shm DeviceMajor:0 DeviceMinor:348 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-376 DeviceMajor:0 DeviceMinor:376 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-582 DeviceMajor:0 DeviceMinor:582 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/90c6474d-44a1-4164-a85b-6de0525dc656/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:783 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-879 DeviceMajor:0 DeviceMinor:879 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ae06b35b34defd433d66d0dcfdcccb5e623a3353da2ccedea19406db7fe465d6/userdata/shm DeviceMajor:0 DeviceMinor:366 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-390 DeviceMajor:0 DeviceMinor:390 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6592aa5b-4a50-40f6-80a5-87e3c547018d/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:569 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d7a36fdd0b153d8fdb4540b3fcd458052672e0226aedc009e1ca191a106ed499/userdata/shm DeviceMajor:0 DeviceMinor:326 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d37419779a5a99d07a8431d2c7b74e48bacfbaba667a5ee5762a54d36c0f1cf1/userdata/shm DeviceMajor:0 DeviceMinor:612 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:209 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1e9803a4-a166-42dc-9498-57e213602684/volumes/kubernetes.io~projected/kube-api-access-4vqww DeviceMajor:0 DeviceMinor:416 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8226ffac-1f76-4eaa-ada5-056b5fd031b4/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:63 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/90c6474d-44a1-4164-a85b-6de0525dc656/volumes/kubernetes.io~projected/kube-api-access-wwjh6 DeviceMajor:0 DeviceMinor:92 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-847 DeviceMajor:0 DeviceMinor:847 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0/volumes/kubernetes.io~projected/kube-api-access-x27d2 DeviceMajor:0 DeviceMinor:125 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-95 DeviceMajor:0 DeviceMinor:95 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-515 DeviceMajor:0 DeviceMinor:515 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-504 DeviceMajor:0 DeviceMinor:504 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7343df96-cba2-477b-8a1b-7af369620440/volumes/kubernetes.io~projected/kube-api-access-6vg7m DeviceMajor:0 DeviceMinor:302 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/923bfd475b1f36c0aed9c9baa6b1e8120764cc5989d69bd8394f8af7e46356e0/userdata/shm DeviceMajor:0 DeviceMinor:766 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/72c8417a873fd1b85ceced7f871125b403b5b588edc21a1d386d6970721625a8/userdata/shm DeviceMajor:0 DeviceMinor:784 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cce07379c81de2caa56b921b64dd3ee63be30f56bcec066d326de0a8f136d5b8/userdata/shm DeviceMajor:0 DeviceMinor:792 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-777 DeviceMajor:0 DeviceMinor:777 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/a8c840d1-8047-4ad6-a990-3ab119ae1cc5/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:418 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:532 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6e4e773c-d970-4f5e-9172-c1ebdb41888d/volumes/kubernetes.io~projected/kube-api-access-tdcsm DeviceMajor:0 DeviceMinor:239 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-274 DeviceMajor:0 DeviceMinor:274 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-419 DeviceMajor:0 DeviceMinor:419 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-507 DeviceMajor:0 DeviceMinor:507 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7343df96-cba2-477b-8a1b-7af369620440/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:301 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6e4e773c-d970-4f5e-9172-c1ebdb41888d/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:62 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bd48d4fa30aeda024af9d88b2a92ab9f3ad6a982cbd20ba4d8bca985b63c0b34/userdata/shm DeviceMajor:0 DeviceMinor:75 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/14eb83e7-c436-4f10-8cba-29e09a7036a8/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:769 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f726d662-90e1-45b9-9bba-76a9c03faced/volumes/kubernetes.io~projected/kube-api-access-hflng DeviceMajor:0 DeviceMinor:616 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d56fe854f57a86510068f43b63767127f8679659000c7763b64518661a2fe300/userdata/shm DeviceMajor:0 DeviceMinor:635 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0d868028-9984-472a-8403-ffed767e1bf8/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e6d943705af2ecd94efc1b7b2e6e66854f8618298d38d9d6c5776dd66e931d3a/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-351 DeviceMajor:0 DeviceMinor:351 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/0943b2db-9658-4a8d-89da-00779d55db6e/volumes/kubernetes.io~projected/kube-api-access-vgd4v DeviceMajor:0 DeviceMinor:428 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-603 DeviceMajor:0 DeviceMinor:603 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6e55908e-59f3-45a2-82aa-2616c5a2fd52/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8c72f4222c0466238ecef6497355ca369f8bfcd600621df230959caf510fb4c4/userdata/shm DeviceMajor:0 DeviceMinor:779 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/58581675-62f2-4564-9e12-bf34551b96ac/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:601 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/71b741d4-3899-4d31-afd1-72f5a9321f75/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:53 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/dc1c9136-80e1-4736-8959-cc1e58aee26e/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:804 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-170 DeviceMajor:0 DeviceMinor:170 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2a74c2a-8376-4998-bdc6-02a978f1f568/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/aca574d944d0c954b9a43d41c7decf56919de511e4613805cddc5cc602dee814/userdata/shm DeviceMajor:0 DeviceMinor:258 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/volumes/kubernetes.io~projected/kube-api-access-2894g DeviceMajor:0 DeviceMinor:765 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-854 DeviceMajor:0 DeviceMinor:854 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/e8d83309-58b2-40af-ab48-1f8b9aeffefb/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:873 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/8c09f25b9520e4dc26c1b765dcc792d2167ffd791c05400113eb463b237b8c15/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6/volumes/kubernetes.io~projected/kube-api-access-r9sfh DeviceMajor:0 DeviceMinor:236 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/20217cff-2f81-4a56-9c15-28385c19258c/volumes/kubernetes.io~projected/kube-api-access-nvprm DeviceMajor:0 DeviceMinor:238 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/16c2d774-967f-4964-ab4e-eb13c4364f63/volumes/kubernetes.io~projected/kube-api-access-bdvgq DeviceMajor:0 DeviceMinor:247 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/volumes/kubernetes.io~projected/kube-api-access-9wqpz DeviceMajor:0 DeviceMinor:404 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1c5ece38636979dc6aaacdac426045ab401d2a85cb39e888cefc074380d03a96/userdata/shm 
DeviceMajor:0 DeviceMinor:324 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cef5b900e1661977211454ffc9aaadd8fa1b91ab51948137171cbc32a2dba7c7/userdata/shm DeviceMajor:0 DeviceMinor:312 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c80b4d29df703d07a23db2b30b8fb506c55a2da67bacba3eebf13044aa056687/userdata/shm DeviceMajor:0 DeviceMinor:100 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/603fef71-e0cd-4617-bd8a-a55580578c2f/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-671 DeviceMajor:0 DeviceMinor:671 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-904 DeviceMajor:0 DeviceMinor:904 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-400 DeviceMajor:0 DeviceMinor:400 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-706 DeviceMajor:0 DeviceMinor:706 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/730e1f43-39b7-41de-ac81-270966725477/volumes/kubernetes.io~projected/kube-api-access-2vt8r DeviceMajor:0 DeviceMinor:733 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c1213b50-28bf-43ff-94c4-20616907735b/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:527 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/fcf34b9143a79db85809e953d50ec9054167443cbeec784e34d10ce0fb366cff/userdata/shm DeviceMajor:0 DeviceMinor:537 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-718 DeviceMajor:0 DeviceMinor:718 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7a24aff88a2b33793c90602bd0f46317c68b5e2becc49d106f2e8cd82fff29f4/userdata/shm DeviceMajor:0 DeviceMinor:746 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-814 DeviceMajor:0 DeviceMinor:814 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7574e950-de2e-4f90-99d0-eae3b45cd900/volumes/kubernetes.io~projected/kube-api-access-hpjj6 DeviceMajor:0 DeviceMinor:469 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1e9803a4-a166-42dc-9498-57e213602684/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:415 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-341 DeviceMajor:0 DeviceMinor:341 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-822 DeviceMajor:0 DeviceMinor:822 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-841 DeviceMajor:0 DeviceMinor:841 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-76 DeviceMajor:0 DeviceMinor:76 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/ef1dbe95-a46f-4d09-87b0-f51429f2d82c/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:320 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/03758d96-5a20-4cba-92e0-47f5b1a3e558/volumes/kubernetes.io~projected/kube-api-access-55v4q DeviceMajor:0 DeviceMinor:771 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-793 DeviceMajor:0 DeviceMinor:793 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-126 DeviceMajor:0 DeviceMinor:126 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:237 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/31442e1e-3f42-4dba-82d5-08e5f8d29a58/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:756 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e6d506914f674acae7c420a21d64287e5d50a2208f22be2bad24040b690bdfea/userdata/shm DeviceMajor:0 DeviceMinor:506 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-956 DeviceMajor:0 DeviceMinor:956 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-486 DeviceMajor:0 DeviceMinor:486 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/73dc5747-2d30-4a2d-a784-1dea1e10811d/volumes/kubernetes.io~projected/kube-api-access-9vsld DeviceMajor:0 DeviceMinor:227 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~projected/kube-api-access-992bv DeviceMajor:0 
DeviceMinor:245 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2b5ab386-14ed-4610-a08a-54b6de877603/volumes/kubernetes.io~projected/kube-api-access-nqxjz DeviceMajor:0 DeviceMinor:260 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/1929440f-f2cc-450d-80ff-ded6788baa74/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/edde8919-104a-4f05-8e21-46787f706bed/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-554 DeviceMajor:0 DeviceMinor:554 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5/volumes/kubernetes.io~projected/kube-api-access-qtpqk DeviceMajor:0 DeviceMinor:611 Capacity:32475529216 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d536b99e9f1c4d3aa396db896e6b1009ff8fdbe64376ba3de95876a07436f12a/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-636 DeviceMajor:0 DeviceMinor:636 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-436 DeviceMajor:0 DeviceMinor:436 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/cf9f90f5-643f-41e8-a886-7d19fb064afc/volumes/kubernetes.io~projected/kube-api-access-pr995 DeviceMajor:0 DeviceMinor:78 Capacity:32475529216 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d30550f78f634355f75a46b81834746cb5b11fa2ba553146cdee3bed2ae12ebf/userdata/shm DeviceMajor:0 DeviceMinor:764 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/31d28339b74a0d08ca9d705b4d13c84a3aaf85f1383fa6b578b10c51b3fe36e2/userdata/shm DeviceMajor:0 DeviceMinor:148 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bf47ad2a6c4b47eeb6f25e8817c53884dd3c9945b6828715576a49bc5541234a/userdata/shm DeviceMajor:0 DeviceMinor:252 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-439 DeviceMajor:0 DeviceMinor:439 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:01bcdd1bedab010 MacAddress:c2:9d:aa:53:a4:bc Speed:10000 Mtu:8900} {Name:142a3bdc9b5ff21 MacAddress:72:a1:3d:ec:a3:06 Speed:10000 Mtu:8900} {Name:18a8f8a3e194d3c MacAddress:d2:60:78:cb:35:cc Speed:10000 Mtu:8900} {Name:1addab03e0a4337 MacAddress:b2:26:d0:f7:15:52 Speed:10000 Mtu:8900} {Name:1b2dea30812459a MacAddress:ae:aa:b4:8a:9e:ff Speed:10000 Mtu:8900} {Name:1c5ece38636979d MacAddress:de:a7:d0:db:eb:c7 Speed:10000 Mtu:8900} {Name:1d9516f705e1b86 MacAddress:36:0d:b4:55:8e:48 Speed:10000 Mtu:8900} {Name:2f485ea5123a1d0 MacAddress:6a:07:43:8c:22:1e Speed:10000 Mtu:8900} {Name:30c48665c997060 MacAddress:b6:72:85:9a:ef:6c Speed:10000 Mtu:8900} {Name:310bb063b58a915 MacAddress:ba:be:59:f9:1f:e1 Speed:10000 Mtu:8900} {Name:4801a1906a7001e MacAddress:62:3d:e5:4d:09:e5 Speed:10000 Mtu:8900} {Name:4a1a1ca2f1f627a 
MacAddress:92:c1:0e:09:fe:06 Speed:10000 Mtu:8900} {Name:5868e4aaa495ba2 MacAddress:ae:3b:b9:9b:17:7e Speed:10000 Mtu:8900} {Name:58be068b21a4eb9 MacAddress:62:e0:90:49:67:e0 Speed:10000 Mtu:8900} {Name:72af96f2c7705b2 MacAddress:4a:2e:59:62:a8:e3 Speed:10000 Mtu:8900} {Name:72b959a542e46e3 MacAddress:02:79:90:7b:c9:28 Speed:10000 Mtu:8900} {Name:72c8417a873fd1b MacAddress:ae:af:03:f5:18:b4 Speed:10000 Mtu:8900} {Name:7a24aff88a2b337 MacAddress:12:08:b2:8a:18:2d Speed:10000 Mtu:8900} {Name:8c72f4222c04662 MacAddress:d6:b9:48:76:0c:90 Speed:10000 Mtu:8900} {Name:91b49f3d1bef1ff MacAddress:5e:d3:c8:46:a2:03 Speed:10000 Mtu:8900} {Name:923bfd475b1f36c MacAddress:be:12:6c:2f:cf:9e Speed:10000 Mtu:8900} {Name:99e9d3fc7152ff7 MacAddress:5e:e2:c5:2a:eb:b3 Speed:10000 Mtu:8900} {Name:a7ea7f8a7c14a47 MacAddress:6e:0c:1b:ac:ce:ba Speed:10000 Mtu:8900} {Name:a8c0e7677e60078 MacAddress:2a:69:a1:fd:4b:a4 Speed:10000 Mtu:8900} {Name:aca574d944d0c95 MacAddress:2a:e7:67:30:3c:4b Speed:10000 Mtu:8900} {Name:adad8c1ef5c4b58 MacAddress:9a:d7:75:72:d4:54 Speed:10000 Mtu:8900} {Name:ae06b35b34defd4 MacAddress:66:74:f3:2e:e2:27 Speed:10000 Mtu:8900} {Name:ae713e76b592ab4 MacAddress:86:a1:b0:38:e9:eb Speed:10000 Mtu:8900} {Name:af3b95a05d0ae32 MacAddress:6e:81:79:11:80:65 Speed:10000 Mtu:8900} {Name:b3ecbff0b1ffe2e MacAddress:7a:e2:cb:02:e9:31 Speed:10000 Mtu:8900} {Name:b833b4c44fc7671 MacAddress:52:bf:a7:43:82:51 Speed:10000 Mtu:8900} {Name:b8d42a515c20f0a MacAddress:72:1e:56:ed:df:43 Speed:10000 Mtu:8900} {Name:b99b6a6f8847624 MacAddress:12:d3:3e:9a:ac:7c Speed:10000 Mtu:8900} {Name:bd48d4fa30aeda0 MacAddress:d2:2f:29:fe:64:e7 Speed:10000 Mtu:8900} {Name:bf47ad2a6c4b47e MacAddress:c2:f2:1d:53:39:e8 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:e2:2f:c5:ea:9f:67 Speed:0 Mtu:8900} {Name:c2463d59212cd94 MacAddress:b6:36:b4:19:11:d8 Speed:10000 Mtu:8900} {Name:c376cfcc149f814 MacAddress:4e:3b:24:ce:3e:c9 Speed:10000 Mtu:8900} 
{Name:c5484d1b3c48429 MacAddress:e6:be:72:17:e9:1e Speed:10000 Mtu:8900} {Name:cce07379c81de2c MacAddress:f6:d5:f6:73:70:27 Speed:10000 Mtu:8900} {Name:cef5b900e166197 MacAddress:8e:6a:40:56:ae:a4 Speed:10000 Mtu:8900} {Name:cf67d16ae41f2d0 MacAddress:ea:4f:6a:ac:e8:c5 Speed:10000 Mtu:8900} {Name:d30550f78f63435 MacAddress:ca:44:c1:ea:dc:03 Speed:10000 Mtu:8900} {Name:d37419779a5a99d MacAddress:be:65:2e:92:0f:12 Speed:10000 Mtu:8900} {Name:d536b99e9f1c4d3 MacAddress:72:31:af:24:3d:79 Speed:10000 Mtu:8900} {Name:d56fe854f57a865 MacAddress:4e:32:d9:82:ca:59 Speed:10000 Mtu:8900} {Name:d7a36fdd0b153d8 MacAddress:aa:2c:e6:92:3f:f2 Speed:10000 Mtu:8900} {Name:dfda9ac962c7295 MacAddress:9a:5a:b6:a2:49:39 Speed:10000 Mtu:8900} {Name:e6d943705af2ecd MacAddress:da:fb:e7:9b:2f:3a Speed:10000 Mtu:8900} {Name:ef7730594563bab MacAddress:5a:b7:71:29:81:e8 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:fc:21:de Speed:-1 Mtu:9000} {Name:fcf34b9143a79db MacAddress:a6:53:24:db:f8:05 Speed:10000 Mtu:8900} {Name:fea61f96ae5a58f MacAddress:d2:b0:03:ea:b8:ce Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:f6:fc:d3:7e:3d:76 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] 
UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 
Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 12:49:33.612075 master-0 kubenswrapper[19715]: I0313 12:49:33.611473 19715 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 13 12:49:33.612075 master-0 kubenswrapper[19715]: I0313 12:49:33.611569 19715 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 12:49:33.612075 master-0 kubenswrapper[19715]: I0313 12:49:33.612079 19715 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 12:49:33.612363 master-0 kubenswrapper[19715]: I0313 12:49:33.612308 19715 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 12:49:33.612762 master-0 kubenswrapper[19715]: I0313 12:49:33.612355 19715 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 12:49:33.612984 master-0 kubenswrapper[19715]: I0313 12:49:33.612804 19715 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 12:49:33.612984 master-0 kubenswrapper[19715]: I0313 12:49:33.612822 19715 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 12:49:33.612984 master-0 kubenswrapper[19715]: I0313 12:49:33.612845 19715 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 12:49:33.612984 master-0 kubenswrapper[19715]: I0313 12:49:33.612918 19715 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 12:49:33.613238 master-0 kubenswrapper[19715]: I0313 12:49:33.613021 19715 state_mem.go:36] "Initialized new in-memory state store" Mar 13 12:49:33.613238 master-0 kubenswrapper[19715]: I0313 12:49:33.613144 19715 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 13 12:49:33.613735 master-0 kubenswrapper[19715]: I0313 12:49:33.613314 19715 kubelet.go:418] "Attempting to sync node with API server" Mar 13 12:49:33.613735 master-0 kubenswrapper[19715]: I0313 12:49:33.613343 19715 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 12:49:33.613735 master-0 kubenswrapper[19715]: I0313 12:49:33.613412 19715 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 13 12:49:33.613735 master-0 kubenswrapper[19715]: I0313 12:49:33.613431 19715 kubelet.go:324] "Adding apiserver pod source" Mar 13 12:49:33.616432 master-0 kubenswrapper[19715]: W0313 12:49:33.616337 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:33.616562 master-0 kubenswrapper[19715]: E0313 12:49:33.616473 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:33.622840 master-0 kubenswrapper[19715]: 
I0313 12:49:33.622801 19715 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 12:49:33.625253 master-0 kubenswrapper[19715]: I0313 12:49:33.625120 19715 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 13 12:49:33.626300 master-0 kubenswrapper[19715]: W0313 12:49:33.626200 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:33.626388 master-0 kubenswrapper[19715]: E0313 12:49:33.626334 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:33.626388 master-0 kubenswrapper[19715]: I0313 12:49:33.626273 19715 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 13 12:49:33.627060 master-0 kubenswrapper[19715]: I0313 12:49:33.626926 19715 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627214 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627233 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627241 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627253 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627260 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627287 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627295 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627302 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627310 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627318 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 13 12:49:33.627319 master-0 kubenswrapper[19715]: I0313 12:49:33.627328 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 13 12:49:33.627967 master-0 kubenswrapper[19715]: I0313 12:49:33.627345 19715 
plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 13 12:49:33.627967 master-0 kubenswrapper[19715]: I0313 12:49:33.627387 19715 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 13 12:49:33.628157 master-0 kubenswrapper[19715]: I0313 12:49:33.628056 19715 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:33.628254 master-0 kubenswrapper[19715]: I0313 12:49:33.628236 19715 server.go:1280] "Started kubelet" Mar 13 12:49:33.629159 master-0 kubenswrapper[19715]: I0313 12:49:33.629102 19715 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 12:49:33.629294 master-0 kubenswrapper[19715]: I0313 12:49:33.629261 19715 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 12:49:33.629344 master-0 kubenswrapper[19715]: I0313 12:49:33.629322 19715 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 13 12:49:33.630016 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 13 12:49:33.630357 master-0 kubenswrapper[19715]: I0313 12:49:33.630099 19715 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 12:49:33.632641 master-0 kubenswrapper[19715]: I0313 12:49:33.632542 19715 server.go:449] "Adding debug handlers to kubelet server" Mar 13 12:49:33.635225 master-0 kubenswrapper[19715]: E0313 12:49:33.635016 19715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c6784dd974c8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:49:33.62815707 +0000 UTC m=+0.194829827,LastTimestamp:2026-03-13 12:49:33.62815707 +0000 UTC m=+0.194829827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:49:33.642177 master-0 kubenswrapper[19715]: I0313 12:49:33.642042 19715 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 13 12:49:33.642488 master-0 kubenswrapper[19715]: I0313 12:49:33.642424 19715 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 12:27:50 +0000 UTC, rotation deadline is 2026-03-14 09:52:31.369601077 +0000 UTC Mar 13 12:49:33.642488 master-0 kubenswrapper[19715]: I0313 12:49:33.642475 19715 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 21h2m57.727128458s for next certificate rotation Mar 13 12:49:33.642779 master-0 kubenswrapper[19715]: I0313 12:49:33.642744 19715 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Mar 13 12:49:33.643172 master-0 kubenswrapper[19715]: I0313 12:49:33.643134 19715 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 13 12:49:33.643172 master-0 kubenswrapper[19715]: I0313 12:49:33.643155 19715 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 13 12:49:33.643403 master-0 kubenswrapper[19715]: I0313 12:49:33.643372 19715 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 13 12:49:33.645623 master-0 kubenswrapper[19715]: E0313 12:49:33.644275 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:33.645623 master-0 kubenswrapper[19715]: W0313 12:49:33.645453 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:33.645623 master-0 kubenswrapper[19715]: E0313 12:49:33.645551 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:33.645958 master-0 kubenswrapper[19715]: I0313 12:49:33.645926 19715 factory.go:153] Registering CRI-O factory Mar 13 12:49:33.646040 master-0 kubenswrapper[19715]: I0313 12:49:33.645969 19715 factory.go:221] Registration of the crio container factory successfully Mar 13 12:49:33.646083 master-0 kubenswrapper[19715]: I0313 12:49:33.646070 19715 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such 
file or directory Mar 13 12:49:33.646125 master-0 kubenswrapper[19715]: I0313 12:49:33.646086 19715 factory.go:55] Registering systemd factory Mar 13 12:49:33.646125 master-0 kubenswrapper[19715]: I0313 12:49:33.646094 19715 factory.go:221] Registration of the systemd container factory successfully Mar 13 12:49:33.646233 master-0 kubenswrapper[19715]: I0313 12:49:33.646142 19715 factory.go:103] Registering Raw factory Mar 13 12:49:33.646233 master-0 kubenswrapper[19715]: I0313 12:49:33.646167 19715 manager.go:1196] Started watching for new ooms in manager Mar 13 12:49:33.647039 master-0 kubenswrapper[19715]: I0313 12:49:33.646975 19715 manager.go:319] Starting recovery of all containers Mar 13 12:49:33.663987 master-0 kubenswrapper[19715]: E0313 12:49:33.659941 19715 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 13 12:49:33.669867 master-0 kubenswrapper[19715]: E0313 12:49:33.669796 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 13 12:49:33.676595 master-0 kubenswrapper[19715]: I0313 12:49:33.676369 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5623ea13-a34b-4510-8902-341912d115df" volumeName="kubernetes.io/projected/5623ea13-a34b-4510-8902-341912d115df-kube-api-access-q9tpt" seLinuxMountContext="" Mar 13 12:49:33.676595 master-0 kubenswrapper[19715]: I0313 12:49:33.676487 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="684c9067-189a-4f50-ac8d-97111aa73d9c" volumeName="kubernetes.io/projected/684c9067-189a-4f50-ac8d-97111aa73d9c-kube-api-access" seLinuxMountContext="" Mar 13 
12:49:33.676595 master-0 kubenswrapper[19715]: I0313 12:49:33.676504 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71b741d4-3899-4d31-afd1-72f5a9321f75" volumeName="kubernetes.io/projected/71b741d4-3899-4d31-afd1-72f5a9321f75-kube-api-access-2h5ht" seLinuxMountContext="" Mar 13 12:49:33.676595 master-0 kubenswrapper[19715]: I0313 12:49:33.676521 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0763043-3813-43b6-9618-b2d15c942edb" volumeName="kubernetes.io/projected/e0763043-3813-43b6-9618-b2d15c942edb-kube-api-access-mqhcp" seLinuxMountContext="" Mar 13 12:49:33.676595 master-0 kubenswrapper[19715]: I0313 12:49:33.676534 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eda319d8-825a-4881-96a9-5386b87f8a4f" volumeName="kubernetes.io/empty-dir/eda319d8-825a-4881-96a9-5386b87f8a4f-cache" seLinuxMountContext="" Mar 13 12:49:33.676595 master-0 kubenswrapper[19715]: I0313 12:49:33.676547 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0943b2db-9658-4a8d-89da-00779d55db6e" volumeName="kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-trusted-ca-bundle" seLinuxMountContext="" Mar 13 12:49:33.676595 master-0 kubenswrapper[19715]: I0313 12:49:33.676559 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" volumeName="kubernetes.io/secret/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.676595 master-0 kubenswrapper[19715]: I0313 12:49:33.676610 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e9803a4-a166-42dc-9498-57e213602684" volumeName="kubernetes.io/projected/1e9803a4-a166-42dc-9498-57e213602684-kube-api-access-4vqww" seLinuxMountContext="" Mar 
13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.676632 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef1dbe95-a46f-4d09-87b0-f51429f2d82c" volumeName="kubernetes.io/secret/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.676652 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03758d96-5a20-4cba-92e0-47f5b1a3e558" volumeName="kubernetes.io/projected/03758d96-5a20-4cba-92e0-47f5b1a3e558-kube-api-access-55v4q" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.676855 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16c2d774-967f-4964-ab4e-eb13c4364f63" volumeName="kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-bound-sa-token" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.676869 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a6a45be0-19ef-4d36-b8a7-eb2705d24bfa" volumeName="kubernetes.io/projected/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa-kube-api-access-9n8sb" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.676886 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c840d1-8047-4ad6-a990-3ab119ae1cc5" volumeName="kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-ca-certs" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.676943 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2a74c2a-8376-4998-bdc6-02a978f1f568" volumeName="kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-config" seLinuxMountContext="" Mar 13 
12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.676957 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53" volumeName="kubernetes.io/projected/1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53-kube-api-access-9wqpz" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.676976 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad68c2d-762a-47ed-bd56-e823a83b9087" volumeName="kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-script-lib" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.676993 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d1a0616-4479-4621-b042-36a586bd8248" volumeName="kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-binary-copy" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677006 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f942fce-07a9-4377-8330-c6249a5a8b24" volumeName="kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677031 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/projected/6e55908e-59f3-45a2-82aa-2616c5a2fd52-kube-api-access-x2jkn" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677043 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7574e950-de2e-4f90-99d0-eae3b45cd900" volumeName="kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-trusted-ca-bundle" 
seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677060 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" volumeName="kubernetes.io/empty-dir/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-snapshots" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677071 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" volumeName="kubernetes.io/projected/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-kube-api-access-dg5p4" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677093 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14eb83e7-c436-4f10-8cba-29e09a7036a8" volumeName="kubernetes.io/secret/14eb83e7-c436-4f10-8cba-29e09a7036a8-proxy-tls" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677105 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf9f90f5-643f-41e8-a886-7d19fb064afc" volumeName="kubernetes.io/projected/cf9f90f5-643f-41e8-a886-7d19fb064afc-kube-api-access-pr995" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677133 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0763043-3813-43b6-9618-b2d15c942edb" volumeName="kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-images" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677148 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e9803a4-a166-42dc-9498-57e213602684" volumeName="kubernetes.io/configmap/1e9803a4-a166-42dc-9498-57e213602684-signing-cabundle" 
seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677164 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31442e1e-3f42-4dba-82d5-08e5f8d29a58" volumeName="kubernetes.io/configmap/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cco-trusted-ca" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677180 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74fa8c05-2d64-4307-9fe3-1d3d69a5aa75" volumeName="kubernetes.io/secret/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-control-plane-machine-set-operator-tls" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677193 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5623ea13-a34b-4510-8902-341912d115df" volumeName="kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-utilities" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677204 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346" volumeName="kubernetes.io/configmap/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-trusted-ca" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677247 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="59c9773d-7e88-4e30-9b8a-792a869a860e" volumeName="kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677262 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae6e46f-a465-46e6-bc27-d13fc6f90d8c" 
volumeName="kubernetes.io/projected/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-kube-api-access-cscxl" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677275 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ae6e46f-a465-46e6-bc27-d13fc6f90d8c" volumeName="kubernetes.io/secret/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-samples-operator-tls" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677293 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" volumeName="kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-trusted-ca-bundle" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677308 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14eb83e7-c436-4f10-8cba-29e09a7036a8" volumeName="kubernetes.io/projected/14eb83e7-c436-4f10-8cba-29e09a7036a8-kube-api-access-kvn5d" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677320 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1929440f-f2cc-450d-80ff-ded6788baa74" volumeName="kubernetes.io/configmap/1929440f-f2cc-450d-80ff-ded6788baa74-config" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677338 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2ad4825-17fa-4ddd-b21e-334158f1c048" volumeName="kubernetes.io/configmap/b2ad4825-17fa-4ddd-b21e-334158f1c048-config" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677351 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2ad4825-17fa-4ddd-b21e-334158f1c048" 
volumeName="kubernetes.io/secret/b2ad4825-17fa-4ddd-b21e-334158f1c048-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677365 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="70c8b79e-4d29-4ae2-a24f-68595d942442" volumeName="kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677379 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7343df96-cba2-477b-8a1b-7af369620440" volumeName="kubernetes.io/secret/7343df96-cba2-477b-8a1b-7af369620440-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677391 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5" volumeName="kubernetes.io/projected/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-kube-api-access-qtpqk" seLinuxMountContext="" Mar 13 12:49:33.677362 master-0 kubenswrapper[19715]: I0313 12:49:33.677402 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="730e1f43-39b7-41de-ac81-270966725477" volumeName="kubernetes.io/projected/730e1f43-39b7-41de-ac81-270966725477-kube-api-access-2vt8r" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677414 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7343df96-cba2-477b-8a1b-7af369620440" volumeName="kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-proxy-ca-bundles" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677429 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="74fa8c05-2d64-4307-9fe3-1d3d69a5aa75" volumeName="kubernetes.io/projected/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-kube-api-access-cv745" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677441 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8226ffac-1f76-4eaa-ada5-056b5fd031b4" volumeName="kubernetes.io/projected/8226ffac-1f76-4eaa-ada5-056b5fd031b4-kube-api-access-gkcxc" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677478 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c840d1-8047-4ad6-a990-3ab119ae1cc5" volumeName="kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-kube-api-access-w97j5" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677491 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16c2d774-967f-4964-ab4e-eb13c4364f63" volumeName="kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-kube-api-access-bdvgq" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677503 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1e9803a4-a166-42dc-9498-57e213602684" volumeName="kubernetes.io/secret/1e9803a4-a166-42dc-9498-57e213602684-signing-key" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677519 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b5ab386-14ed-4610-a08a-54b6de877603" volumeName="kubernetes.io/configmap/2b5ab386-14ed-4610-a08a-54b6de877603-iptables-alerter-script" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677533 19715 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="b6a9184d-0557-4e61-bf31-6dd69c0dfb15" volumeName="kubernetes.io/projected/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-kube-api-access-djchk" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677545 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0943b2db-9658-4a8d-89da-00779d55db6e" volumeName="kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-serving-ca" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677590 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b2ad4825-17fa-4ddd-b21e-334158f1c048" volumeName="kubernetes.io/projected/b2ad4825-17fa-4ddd-b21e-334158f1c048-kube-api-access-tnbf9" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677612 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf580693-2931-4fef-adb5-b396f7303352" volumeName="kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677628 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2b5ab386-14ed-4610-a08a-54b6de877603" volumeName="kubernetes.io/projected/2b5ab386-14ed-4610-a08a-54b6de877603-kube-api-access-nqxjz" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677876 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90c6474d-44a1-4164-a85b-6de0525dc656" volumeName="kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-apiservice-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677894 19715 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="ef1dbe95-a46f-4d09-87b0-f51429f2d82c" volumeName="kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677915 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e4e773c-d970-4f5e-9172-c1ebdb41888d" volumeName="kubernetes.io/configmap/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-trusted-ca" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677930 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7574e950-de2e-4f90-99d0-eae3b45cd900" volumeName="kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-client" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677943 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c840d1-8047-4ad6-a990-3ab119ae1cc5" volumeName="kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677959 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d868028-9984-472a-8403-ffed767e1bf8" volumeName="kubernetes.io/configmap/0d868028-9984-472a-8403-ffed767e1bf8-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677973 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20217cff-2f81-4a56-9c15-28385c19258c" volumeName="kubernetes.io/projected/20217cff-2f81-4a56-9c15-28385c19258c-kube-api-access-nvprm" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.677987 19715 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" volumeName="kubernetes.io/secret/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678022 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7574e950-de2e-4f90-99d0-eae3b45cd900" volumeName="kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678041 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5" volumeName="kubernetes.io/secret/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-metrics-tls" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678056 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffcc3a23-d81c-4064-a24a-857dbe3222c8" volumeName="kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-daemon-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678076 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d868028-9984-472a-8403-ffed767e1bf8" volumeName="kubernetes.io/projected/0d868028-9984-472a-8403-ffed767e1bf8-kube-api-access" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678089 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6592aa5b-4a50-40f6-80a5-87e3c547018d" volumeName="kubernetes.io/secret/6592aa5b-4a50-40f6-80a5-87e3c547018d-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678101 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="7574e950-de2e-4f90-99d0-eae3b45cd900" volumeName="kubernetes.io/projected/7574e950-de2e-4f90-99d0-eae3b45cd900-kube-api-access-hpjj6" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678121 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" volumeName="kubernetes.io/projected/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-kube-api-access-x27d2" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678135 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e4e773c-d970-4f5e-9172-c1ebdb41888d" volumeName="kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678150 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f85ab8ab-f9f1-47ad-9c96-9498cef92474" volumeName="kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678164 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0943b2db-9658-4a8d-89da-00779d55db6e" volumeName="kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678178 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346" volumeName="kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678191 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="59c9773d-7e88-4e30-9b8a-792a869a860e" volumeName="kubernetes.io/projected/59c9773d-7e88-4e30-9b8a-792a869a860e-kube-api-access-vp6bn" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678213 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffcc3a23-d81c-4064-a24a-857dbe3222c8" volumeName="kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cni-binary-copy" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678229 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ffcc3a23-d81c-4064-a24a-857dbe3222c8" volumeName="kubernetes.io/projected/ffcc3a23-d81c-4064-a24a-857dbe3222c8-kube-api-access-b9nhl" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678241 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4f942fce-07a9-4377-8330-c6249a5a8b24" volumeName="kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678256 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678280 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf9f90f5-643f-41e8-a886-7d19fb064afc" volumeName="kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-utilities" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678304 19715 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346" volumeName="kubernetes.io/projected/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-kube-api-access-pqm5h" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678324 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73dc5747-2d30-4a2d-a784-1dea1e10811d" volumeName="kubernetes.io/configmap/73dc5747-2d30-4a2d-a784-1dea1e10811d-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678336 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eda319d8-825a-4881-96a9-5386b87f8a4f" volumeName="kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-ca-certs" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678348 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="730e1f43-39b7-41de-ac81-270966725477" volumeName="kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-catalog-content" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678361 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0763043-3813-43b6-9618-b2d15c942edb" volumeName="kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678373 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0943b2db-9658-4a8d-89da-00779d55db6e" volumeName="kubernetes.io/projected/0943b2db-9658-4a8d-89da-00779d55db6e-kube-api-access-vgd4v" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678386 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="1ad68c2d-762a-47ed-bd56-e823a83b9087" volumeName="kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-env-overrides" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678402 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="603fef71-e0cd-4617-bd8a-a55580578c2f" volumeName="kubernetes.io/configmap/603fef71-e0cd-4617-bd8a-a55580578c2f-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678417 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="603fef71-e0cd-4617-bd8a-a55580578c2f" volumeName="kubernetes.io/secret/603fef71-e0cd-4617-bd8a-a55580578c2f-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678432 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e4e773c-d970-4f5e-9172-c1ebdb41888d" volumeName="kubernetes.io/projected/6e4e773c-d970-4f5e-9172-c1ebdb41888d-kube-api-access-tdcsm" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678443 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6a9184d-0557-4e61-bf31-6dd69c0dfb15" volumeName="kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-catalog-content" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678456 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f726d662-90e1-45b9-9bba-76a9c03faced" volumeName="kubernetes.io/projected/f726d662-90e1-45b9-9bba-76a9c03faced-kube-api-access-hflng" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678469 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-service-ca" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678482 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7343df96-cba2-477b-8a1b-7af369620440" volumeName="kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678496 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90c6474d-44a1-4164-a85b-6de0525dc656" volumeName="kubernetes.io/empty-dir/90c6474d-44a1-4164-a85b-6de0525dc656-tmpfs" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678508 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7574e950-de2e-4f90-99d0-eae3b45cd900" volumeName="kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678520 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7574e950-de2e-4f90-99d0-eae3b45cd900" volumeName="kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-encryption-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678534 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6a9184d-0557-4e61-bf31-6dd69c0dfb15" volumeName="kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-utilities" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678545 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="f2a74c2a-8376-4998-bdc6-02a978f1f568" volumeName="kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-trusted-ca-bundle" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678558 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f66dbf5-722f-4aed-becb-fb1b62ea7fe6" volumeName="kubernetes.io/configmap/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678661 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54c7efc1-6d89-4831-89d6-6f2812c36c36" volumeName="kubernetes.io/secret/54c7efc1-6d89-4831-89d6-6f2812c36c36-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678682 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71b741d4-3899-4d31-afd1-72f5a9321f75" volumeName="kubernetes.io/configmap/71b741d4-3899-4d31-afd1-72f5a9321f75-telemetry-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678709 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b1777e4-6833-4b68-8cdf-ea8b36dbeae9" volumeName="kubernetes.io/projected/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-kube-api-access-5jknp" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678729 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f66dbf5-722f-4aed-becb-fb1b62ea7fe6" volumeName="kubernetes.io/projected/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-kube-api-access-r9sfh" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678743 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6d1a0616-4479-4621-b042-36a586bd8248" volumeName="kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-sysctl-allowlist" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678766 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678790 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73dc5747-2d30-4a2d-a784-1dea1e10811d" volumeName="kubernetes.io/secret/73dc5747-2d30-4a2d-a784-1dea1e10811d-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678810 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03758d96-5a20-4cba-92e0-47f5b1a3e558" volumeName="kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678832 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0943b2db-9658-4a8d-89da-00779d55db6e" volumeName="kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-audit-policies" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678881 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad68c2d-762a-47ed-bd56-e823a83b9087" volumeName="kubernetes.io/secret/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovn-node-metrics-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678901 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="c1213b50-28bf-43ff-94c4-20616907735b" volumeName="kubernetes.io/configmap/c1213b50-28bf-43ff-94c4-20616907735b-trusted-ca" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678920 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0763043-3813-43b6-9618-b2d15c942edb" volumeName="kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678935 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef1dbe95-a46f-4d09-87b0-f51429f2d82c" volumeName="kubernetes.io/projected/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-kube-api-access-64hl9" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678951 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71b741d4-3899-4d31-afd1-72f5a9321f75" volumeName="kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678974 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80ceb0f9-67e4-4275-8532-85b6602367a2" volumeName="kubernetes.io/projected/80ceb0f9-67e4-4275-8532-85b6602367a2-kube-api-access" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678986 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5" volumeName="kubernetes.io/configmap/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-config-volume" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.678998 19715 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="d53c7e46-86e9-4328-9dfd-aec6deef5c01" volumeName="kubernetes.io/projected/d53c7e46-86e9-4328-9dfd-aec6deef5c01-kube-api-access-wk9km" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679011 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8d83309-58b2-40af-ab48-1f8b9aeffefb" volumeName="kubernetes.io/secret/e8d83309-58b2-40af-ab48-1f8b9aeffefb-proxy-tls" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679025 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03758d96-5a20-4cba-92e0-47f5b1a3e558" volumeName="kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-images" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679037 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14eb83e7-c436-4f10-8cba-29e09a7036a8" volumeName="kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-auth-proxy-config" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679050 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad68c2d-762a-47ed-bd56-e823a83b9087" volumeName="kubernetes.io/projected/1ad68c2d-762a-47ed-bd56-e823a83b9087-kube-api-access-b2lvh" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679063 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf580693-2931-4fef-adb5-b396f7303352" volumeName="kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-env-overrides" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679075 19715 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="54c7efc1-6d89-4831-89d6-6f2812c36c36" volumeName="kubernetes.io/projected/54c7efc1-6d89-4831-89d6-6f2812c36c36-kube-api-access-qttkt" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679088 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a8c840d1-8047-4ad6-a990-3ab119ae1cc5" volumeName="kubernetes.io/empty-dir/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-cache" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679102 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2" volumeName="kubernetes.io/secret/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-cluster-storage-operator-serving-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679113 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e0763043-3813-43b6-9618-b2d15c942edb" volumeName="kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cert" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679126 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="03758d96-5a20-4cba-92e0-47f5b1a3e558" volumeName="kubernetes.io/secret/03758d96-5a20-4cba-92e0-47f5b1a3e558-machine-api-operator-tls" seLinuxMountContext="" Mar 13 12:49:33.679095 master-0 kubenswrapper[19715]: I0313 12:49:33.679136 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da" volumeName="kubernetes.io/projected/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kube-api-access" seLinuxMountContext="" Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679567 19715 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="bc244427-5e4e-441c-a04d-f93aeca9b7c1" volumeName="kubernetes.io/projected/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kube-api-access" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679610 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-client" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679624 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0d868028-9984-472a-8403-ffed767e1bf8" volumeName="kubernetes.io/secret/0d868028-9984-472a-8403-ffed767e1bf8-serving-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679636 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b1777e4-6833-4b68-8cdf-ea8b36dbeae9" volumeName="kubernetes.io/secret/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-metrics-tls" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679649 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="54c7efc1-6d89-4831-89d6-6f2812c36c36" volumeName="kubernetes.io/empty-dir/54c7efc1-6d89-4831-89d6-6f2812c36c36-operand-assets" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679662 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58581675-62f2-4564-9e12-bf34551b96ac" volumeName="kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-tmp" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679675 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58581675-62f2-4564-9e12-bf34551b96ac" volumeName="kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-etc-tuned" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679688 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6592aa5b-4a50-40f6-80a5-87e3c547018d" volumeName="kubernetes.io/configmap/6592aa5b-4a50-40f6-80a5-87e3c547018d-auth-proxy-config" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679699 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="730e1f43-39b7-41de-ac81-270966725477" volumeName="kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-utilities" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679710 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7343df96-cba2-477b-8a1b-7af369620440" volumeName="kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-client-ca" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679721 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a5976df-0366-47b3-bc54-1ba7c249e87c" volumeName="kubernetes.io/projected/2a5976df-0366-47b3-bc54-1ba7c249e87c-kube-api-access-27pbr" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679732 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2a5976df-0366-47b3-bc54-1ba7c249e87c" volumeName="kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679743 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5623ea13-a34b-4510-8902-341912d115df" volumeName="kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-catalog-content" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679758 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edde8919-104a-4f05-8e21-46787f706bed" volumeName="kubernetes.io/empty-dir/edde8919-104a-4f05-8e21-46787f706bed-available-featuregates" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679769 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef1dbe95-a46f-4d09-87b0-f51429f2d82c" volumeName="kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-client-ca" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679780 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="684c9067-189a-4f50-ac8d-97111aa73d9c" volumeName="kubernetes.io/secret/684c9067-189a-4f50-ac8d-97111aa73d9c-serving-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679791 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7343df96-cba2-477b-8a1b-7af369620440" volumeName="kubernetes.io/projected/7343df96-cba2-477b-8a1b-7af369620440-kube-api-access-6vg7m" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679801 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8d83309-58b2-40af-ab48-1f8b9aeffefb" volumeName="kubernetes.io/configmap/e8d83309-58b2-40af-ab48-1f8b9aeffefb-mcd-auth-proxy-config" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679813 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2a74c2a-8376-4998-bdc6-02a978f1f568" volumeName="kubernetes.io/projected/f2a74c2a-8376-4998-bdc6-02a978f1f568-kube-api-access-bkjph" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679881 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0943b2db-9658-4a8d-89da-00779d55db6e" volumeName="kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-client" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679893 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16c2d774-967f-4964-ab4e-eb13c4364f63" volumeName="kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679904 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="603fef71-e0cd-4617-bd8a-a55580578c2f" volumeName="kubernetes.io/projected/603fef71-e0cd-4617-bd8a-a55580578c2f-kube-api-access-rspzx" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679917 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf580693-2931-4fef-adb5-b396f7303352" volumeName="kubernetes.io/projected/cf580693-2931-4fef-adb5-b396f7303352-kube-api-access-qg7nx" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679928 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf9f90f5-643f-41e8-a886-7d19fb064afc" volumeName="kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-catalog-content" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679942 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="14eb83e7-c436-4f10-8cba-29e09a7036a8" volumeName="kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-images" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679954 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="58581675-62f2-4564-9e12-bf34551b96ac" volumeName="kubernetes.io/projected/58581675-62f2-4564-9e12-bf34551b96ac-kube-api-access-64w7v" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679966 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="73dc5747-2d30-4a2d-a784-1dea1e10811d" volumeName="kubernetes.io/projected/73dc5747-2d30-4a2d-a784-1dea1e10811d-kube-api-access-9vsld" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679978 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90c6474d-44a1-4164-a85b-6de0525dc656" volumeName="kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-webhook-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.679990 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2" volumeName="kubernetes.io/projected/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-kube-api-access-2894g" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680002 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e8d83309-58b2-40af-ab48-1f8b9aeffefb" volumeName="kubernetes.io/projected/e8d83309-58b2-40af-ab48-1f8b9aeffefb-kube-api-access-4m68d" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680015 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="eda319d8-825a-4881-96a9-5386b87f8a4f" volumeName="kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-kube-api-access-6hpcb" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680028 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2a74c2a-8376-4998-bdc6-02a978f1f568" volumeName="kubernetes.io/secret/f2a74c2a-8376-4998-bdc6-02a978f1f568-serving-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680040 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1929440f-f2cc-450d-80ff-ded6788baa74" volumeName="kubernetes.io/projected/1929440f-f2cc-450d-80ff-ded6788baa74-kube-api-access" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680082 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" volumeName="kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovnkube-config" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680097 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7574e950-de2e-4f90-99d0-eae3b45cd900" volumeName="kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-image-import-ca" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680110 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7574e950-de2e-4f90-99d0-eae3b45cd900" volumeName="kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-serving-ca" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680122 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8226ffac-1f76-4eaa-ada5-056b5fd031b4" volumeName="kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680136 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cf580693-2931-4fef-adb5-b396f7303352" volumeName="kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-ovnkube-identity-cm" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680149 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc1c9136-80e1-4736-8959-cc1e58aee26e" volumeName="kubernetes.io/configmap/dc1c9136-80e1-4736-8959-cc1e58aee26e-service-ca" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680164 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc1c9136-80e1-4736-8959-cc1e58aee26e" volumeName="kubernetes.io/projected/dc1c9136-80e1-4736-8959-cc1e58aee26e-kube-api-access" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680176 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1929440f-f2cc-450d-80ff-ded6788baa74" volumeName="kubernetes.io/secret/1929440f-f2cc-450d-80ff-ded6788baa74-serving-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680200 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31442e1e-3f42-4dba-82d5-08e5f8d29a58" volumeName="kubernetes.io/projected/31442e1e-3f42-4dba-82d5-08e5f8d29a58-kube-api-access-lm4d2" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680213 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" volumeName="kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-env-overrides" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680225 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc1c9136-80e1-4736-8959-cc1e58aee26e" volumeName="kubernetes.io/secret/dc1c9136-80e1-4736-8959-cc1e58aee26e-serving-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680237 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20217cff-2f81-4a56-9c15-28385c19258c" volumeName="kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680249 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31442e1e-3f42-4dba-82d5-08e5f8d29a58" volumeName="kubernetes.io/secret/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cloud-credential-operator-serving-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680261 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346" volumeName="kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680273 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d1a0616-4479-4621-b042-36a586bd8248" volumeName="kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-whereabouts-configmap" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680285 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1213b50-28bf-43ff-94c4-20616907735b" volumeName="kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-bound-sa-token" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680297 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0943b2db-9658-4a8d-89da-00779d55db6e" volumeName="kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-encryption-config" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680309 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16c2d774-967f-4964-ab4e-eb13c4364f63" volumeName="kubernetes.io/configmap/16c2d774-967f-4964-ab4e-eb13c4364f63-trusted-ca" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680321 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1ad68c2d-762a-47ed-bd56-e823a83b9087" volumeName="kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-config" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680333 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edde8919-104a-4f05-8e21-46787f706bed" volumeName="kubernetes.io/secret/edde8919-104a-4f05-8e21-46787f706bed-serving-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680345 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="edde8919-104a-4f05-8e21-46787f706bed" volumeName="kubernetes.io/projected/edde8919-104a-4f05-8e21-46787f706bed-kube-api-access-992bv" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680358 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6592aa5b-4a50-40f6-80a5-87e3c547018d" volumeName="kubernetes.io/projected/6592aa5b-4a50-40f6-80a5-87e3c547018d-kube-api-access-s7cgb" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680377 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6d1a0616-4479-4621-b042-36a586bd8248" volumeName="kubernetes.io/projected/6d1a0616-4479-4621-b042-36a586bd8248-kube-api-access-jn59j" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680399 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1213b50-28bf-43ff-94c4-20616907735b" volumeName="kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-kube-api-access-c2dq8" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680413 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c1213b50-28bf-43ff-94c4-20616907735b" volumeName="kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680429 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2a74c2a-8376-4998-bdc6-02a978f1f568" volumeName="kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-service-ca-bundle" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680441 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" volumeName="kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-service-ca-bundle" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680453 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="684c9067-189a-4f50-ac8d-97111aa73d9c" volumeName="kubernetes.io/configmap/684c9067-189a-4f50-ac8d-97111aa73d9c-config" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680466 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e55908e-59f3-45a2-82aa-2616c5a2fd52" volumeName="kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-ca" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680477 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f85ab8ab-f9f1-47ad-9c96-9498cef92474" volumeName="kubernetes.io/projected/f85ab8ab-f9f1-47ad-9c96-9498cef92474-kube-api-access-sm25n" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680499 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3f66dbf5-722f-4aed-becb-fb1b62ea7fe6" volumeName="kubernetes.io/secret/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-serving-cert" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680514 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7574e950-de2e-4f90-99d0-eae3b45cd900" volumeName="kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-audit" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680540 19715 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="90c6474d-44a1-4164-a85b-6de0525dc656" volumeName="kubernetes.io/projected/90c6474d-44a1-4164-a85b-6de0525dc656-kube-api-access-wwjh6" seLinuxMountContext=""
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680553 19715 reconstruct.go:97] "Volume reconstruction finished"
Mar 13 12:49:33.682193 master-0 kubenswrapper[19715]: I0313 12:49:33.680564 19715 reconciler.go:26] "Reconciler: start to sync state"
Mar 13 12:49:33.692872 master-0 kubenswrapper[19715]: I0313 12:49:33.692731 19715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 13 12:49:33.694935 master-0 kubenswrapper[19715]: I0313 12:49:33.694900 19715 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 13 12:49:33.695260 master-0 kubenswrapper[19715]: I0313 12:49:33.695234 19715 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 13 12:49:33.695332 master-0 kubenswrapper[19715]: I0313 12:49:33.695277 19715 kubelet.go:2335] "Starting kubelet main sync loop"
Mar 13 12:49:33.695381 master-0 kubenswrapper[19715]: E0313 12:49:33.695339 19715 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 13 12:49:33.696529 master-0 kubenswrapper[19715]: W0313 12:49:33.696451 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:49:33.696709 master-0 kubenswrapper[19715]: E0313 12:49:33.696572 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:49:33.705217 master-0 kubenswrapper[19715]: I0313 12:49:33.705150 19715 generic.go:334] "Generic (PLEG): container finished" podID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerID="eab5e29eedcb24ff8a4205f7bf62bee3cde077c035b42cc119aefb133323f99c" exitCode=0
Mar 13 12:49:33.708971 master-0 kubenswrapper[19715]: I0313 12:49:33.708926 19715 generic.go:334] "Generic (PLEG): container finished" podID="b2ad4825-17fa-4ddd-b21e-334158f1c048" containerID="a9dd7732800ec2cf2ba2657ee89d490d35d4ed3ca8ea35ffd325cd650a57aa03" exitCode=0
Mar 13 12:49:33.712527 master-0 kubenswrapper[19715]: I0313 12:49:33.712473 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_aae10aa9-9c7d-4319-9829-e900af7df301/installer/0.log"
Mar 13 12:49:33.712642 master-0 kubenswrapper[19715]: I0313 12:49:33.712569 19715 generic.go:334] "Generic (PLEG): container finished" podID="aae10aa9-9c7d-4319-9829-e900af7df301" containerID="4bc1f7c933d28f40b13d28985334ae240170d114b669b057cc93fee9fb9f7a73" exitCode=1
Mar 13 12:49:33.721250 master-0 kubenswrapper[19715]: I0313 12:49:33.721183 19715 generic.go:334] "Generic (PLEG): container finished" podID="684c9067-189a-4f50-ac8d-97111aa73d9c" containerID="710eb299157e1ef547583f7fd20b397c92fa5af65696f69dc8c6e3ebffa2ae8b" exitCode=0
Mar 13 12:49:33.728030 master-0 kubenswrapper[19715]: I0313 12:49:33.727981 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/3.log"
Mar 13 12:49:33.728759 master-0 kubenswrapper[19715]: I0313 12:49:33.728552 19715 generic.go:334] "Generic (PLEG): container finished" podID="edde8919-104a-4f05-8e21-46787f706bed" containerID="05480ecb7de81ac5be34ed4f520482654182603ba660d11e5c077049c5fcab31" exitCode=255
Mar 13 12:49:33.728759 master-0 kubenswrapper[19715]: I0313 12:49:33.728614 19715 generic.go:334] "Generic (PLEG): container finished" podID="edde8919-104a-4f05-8e21-46787f706bed" containerID="9cb3e3949a1bb640329a4953a85d4530ae11d656b3ce5bea3323fa6af6e8d03b" exitCode=0
Mar 13 12:49:33.731480 master-0 kubenswrapper[19715]: I0313 12:49:33.731447 19715 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="47aafa637897db874d2f314c91d98220473a29ec5c1860c9183088400b424069" exitCode=1
Mar 13 12:49:33.733924 master-0 kubenswrapper[19715]: I0313 12:49:33.733876 19715 generic.go:334] "Generic (PLEG): container finished" podID="730e1f43-39b7-41de-ac81-270966725477" containerID="f9c8b4d625f0aef8e218e4d96fd37c573a6bee5d3051b2b0c36d16b60cba363a" exitCode=0
Mar 13 12:49:33.733924 master-0 kubenswrapper[19715]: I0313 12:49:33.733906 19715 generic.go:334] "Generic (PLEG): container finished" podID="730e1f43-39b7-41de-ac81-270966725477" containerID="399693364fe1a370d24538cb2bf5708b63dd362b46742194b9a96b63a3d6deaf" exitCode=0
Mar 13 12:49:33.736450 master-0 kubenswrapper[19715]: I0313 12:49:33.736399 19715 generic.go:334] "Generic (PLEG): container finished" podID="73dc5747-2d30-4a2d-a784-1dea1e10811d" containerID="f1548edda6fc1651ae68b99d0898df5822866731cd8d5864b19d50d8643d5b08" exitCode=0
Mar 13 12:49:33.739066 master-0 kubenswrapper[19715]: I0313 12:49:33.739015 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/2.log"
Mar 13 12:49:33.739434 master-0 kubenswrapper[19715]: I0313 12:49:33.739386 19715 generic.go:334] "Generic (PLEG): container finished" podID="c1213b50-28bf-43ff-94c4-20616907735b" containerID="59e05d7ef9c275462e23676df5f29c2f046e91105d4c6257aa27b85c4193fd57" exitCode=1
Mar 13 12:49:33.744395 master-0 kubenswrapper[19715]: E0313 12:49:33.744352 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 12:49:33.750267 master-0 kubenswrapper[19715]: I0313 12:49:33.749663 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0feecf04-574d-4bf6-968d-77dd5c35260b/installer/0.log"
Mar 13 12:49:33.750267 master-0 kubenswrapper[19715]: I0313 12:49:33.749720 19715 generic.go:334] "Generic (PLEG): container finished" podID="0feecf04-574d-4bf6-968d-77dd5c35260b" containerID="10be8f9ca4ea6e67dd279190add6bee9a3985f10e4ddcd7b2a1c5c6e9e6e6409" exitCode=1
Mar 13 12:49:33.753358 master-0 kubenswrapper[19715]: I0313 12:49:33.753298 19715 generic.go:334] "Generic (PLEG): container finished" podID="5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0" containerID="2e69b748a2fdfe0cc72146b5f2da55d678257606de7db5ec9d71db1e094acc7b" exitCode=0
Mar 13 12:49:33.758051 master-0 kubenswrapper[19715]: I0313 12:49:33.757996 19715 generic.go:334] "Generic (PLEG): container finished" podID="1929440f-f2cc-450d-80ff-ded6788baa74" containerID="add6080be63d96ac6d15e6ae92fd130acd330b669019c0708be53e9f316105b4" exitCode=0
Mar 13 12:49:33.762238 master-0 kubenswrapper[19715]: I0313 12:49:33.762183 19715 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561" exitCode=2
Mar 13 12:49:33.762238 master-0 kubenswrapper[19715]: I0313 12:49:33.762228 19715 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="5ae7ae35f7136762cbb13e8c36aee38aecdcf9e047584314d44cc6cd1301533e" exitCode=0
Mar 13 12:49:33.765747 master-0 kubenswrapper[19715]: I0313 12:49:33.765543 19715 generic.go:334] "Generic (PLEG): container finished" podID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerID="3d699a661192c0fe629e3652881a79b8980021e82a7bc93d27f3ce7bd63fd41d" exitCode=0
Mar 13 12:49:33.770450 master-0 kubenswrapper[19715]: I0313 12:49:33.770409 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-w8b7h_20217cff-2f81-4a56-9c15-28385c19258c/package-server-manager/0.log"
Mar 13 12:49:33.770905 master-0 kubenswrapper[19715]: I0313 12:49:33.770853 19715 generic.go:334] "Generic (PLEG): container finished" podID="20217cff-2f81-4a56-9c15-28385c19258c" containerID="f380cb6aa96691042a8cede3619ef1bcaa412985b21e3cadd6963fc297c7968d" exitCode=1
Mar 13 12:49:33.773656 master-0 kubenswrapper[19715]: I0313 12:49:33.773507 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-rcfgn_eda319d8-825a-4881-96a9-5386b87f8a4f/manager/0.log"
Mar 13 12:49:33.773751 master-0 kubenswrapper[19715]: I0313 12:49:33.773680 19715 generic.go:334] "Generic (PLEG): container finished" podID="eda319d8-825a-4881-96a9-5386b87f8a4f" containerID="cbb2865534497635b5ca625e2074d592be0ad7241931d751a9044f1c282a4c0f" exitCode=1
Mar 13 12:49:33.786615 master-0 kubenswrapper[19715]: I0313 12:49:33.786546 19715 generic.go:334] "Generic (PLEG): container finished" podID="bc244427-5e4e-441c-a04d-f93aeca9b7c1" containerID="31033f934bf0a080278d866d51b314b3816b30909bafd1008ea255c440f36fb0" exitCode=0
Mar 13 12:49:33.788937 master-0 kubenswrapper[19715]: I0313 12:49:33.788912 19715 generic.go:334] "Generic (PLEG): container finished" podID="0d868028-9984-472a-8403-ffed767e1bf8" containerID="8d3d7c80d1f091cb6801c4897cba8089f08217db69ec67d4a437f0167c034ba9" exitCode=0
Mar 13 12:49:33.790897 master-0 kubenswrapper[19715]: I0313 12:49:33.790876 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-hr4ws_b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/cluster-storage-operator/1.log"
Mar 13 12:49:33.790978 master-0 kubenswrapper[19715]: I0313 12:49:33.790911 19715 generic.go:334] "Generic (PLEG): container finished" podID="b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2" containerID="6fa945ba8a78d2026eeaa5c65617e884ae33b65d477ef3f125c934aff5ce456b" exitCode=255
Mar 13 12:49:33.795451 master-0 kubenswrapper[19715]: E0313 12:49:33.795425 19715 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 13 12:49:33.796328 master-0 kubenswrapper[19715]: I0313 12:49:33.796300 19715 generic.go:334] "Generic (PLEG): container finished" podID="1ad68c2d-762a-47ed-bd56-e823a83b9087" containerID="99513d1025df40d0dec85b8d387ea2b55e803e627368de7db4825a3613c52248" exitCode=0
Mar 13 12:49:33.798191 master-0 kubenswrapper[19715]: I0313 12:49:33.798173 19715 generic.go:334] "Generic (PLEG): container finished" podID="1e9803a4-a166-42dc-9498-57e213602684" containerID="0b8ffb9009d34dca0914bb1efe6a7d4b6106f10f28097f2ee3fe0b233ae17b98" exitCode=0
Mar 13 12:49:33.801422 master-0 kubenswrapper[19715]: I0313 12:49:33.801401 19715 generic.go:334] "Generic (PLEG): container finished" podID="7343df96-cba2-477b-8a1b-7af369620440" containerID="2da3308778e062a9343f0d3dfdc8d6eb4f753f82d1909a294c12d86a1ca52396" exitCode=0
Mar 13 12:49:33.803092 master-0 kubenswrapper[19715]: I0313 12:49:33.803074 19715 generic.go:334] "Generic (PLEG): container finished" podID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" containerID="94037d184139c388b62f88d584af05330086578d35ea58336f426f811ec331bf" exitCode=0
Mar 13 12:49:33.804877 master-0 kubenswrapper[19715]: I0313 12:49:33.804861 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/3.log"
Mar 13 12:49:33.804996 master-0 kubenswrapper[19715]: I0313 12:49:33.804981 19715 generic.go:334] "Generic (PLEG): container finished" podID="1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53" containerID="60ede8da4a56b532dead0c7432bda6cee615e7836acac9f70d47ed4a4d8e1991" exitCode=1
Mar 13 12:49:33.807247 master-0 kubenswrapper[19715]: I0313 12:49:33.807230 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/4.log"
Mar 13 12:49:33.807386 master-0 kubenswrapper[19715]: I0313 12:49:33.807369 19715 generic.go:334] "Generic (PLEG): container finished" podID="f2a74c2a-8376-4998-bdc6-02a978f1f568" containerID="1e6e86d8d97923066fabd3383cefd32a185b673f4155ada881e4c10327bf804d" exitCode=255
Mar 13 12:49:33.809253 master-0 kubenswrapper[19715]: I0313 12:49:33.809231 19715 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="63e03be6775769ad765af20dfd2ac68f1e500a160a4e77eda15bd7fdcfe1bc2a" exitCode=0
Mar 13 12:49:33.809390 master-0 kubenswrapper[19715]: I0313 12:49:33.809370 19715 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="ce22fd707eb8d075fa41f40a0f4c10a702d0584171d207a5ade9ca190ac33eb6" exitCode=0
Mar 13 12:49:33.812443 master-0 kubenswrapper[19715]: I0313 12:49:33.812425 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 13 12:49:33.812846 master-0 kubenswrapper[19715]: I0313 12:49:33.812826 19715 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="fa04bd1d8b838a856ef3334cc68d9da0449dbf549bcd199af5292664d8bc9f66" exitCode=1
Mar 13 12:49:33.812949 master-0 kubenswrapper[19715]: I0313 12:49:33.812935 19715 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="68b6f8966a17045ff6a5d27e4da4e48714a155c30c56d6be16050ed7473f6700" exitCode=0
Mar 13 12:49:33.814518 master-0 kubenswrapper[19715]: I0313 12:49:33.814500 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-mwnxf_5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/cluster-node-tuning-operator/0.log"
Mar 13 12:49:33.814648 master-0 kubenswrapper[19715]: I0313 12:49:33.814632 19715 generic.go:334] "Generic (PLEG): container finished" podID="5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346" containerID="bac301547b48cdecb8c65de938d2eda1a0511b2e5a444761ea88edbc804c54a7" exitCode=1
Mar 13 12:49:33.816361 master-0 kubenswrapper[19715]: I0313 12:49:33.816344 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-lwxxn_a8c840d1-8047-4ad6-a990-3ab119ae1cc5/manager/0.log"
Mar 13 12:49:33.817518 master-0 kubenswrapper[19715]: I0313 12:49:33.817501 19715 generic.go:334] "Generic (PLEG): container finished" podID="a8c840d1-8047-4ad6-a990-3ab119ae1cc5" containerID="30b31c049d6bbc747c9d176a9321b53f132ec100e2bcb266f862f58f0efabb73" exitCode=1
Mar 13 12:49:33.819449 master-0 kubenswrapper[19715]: I0313 12:49:33.819434 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-kb5r7_cf580693-2931-4fef-adb5-b396f7303352/approver/0.log"
Mar 13 12:49:33.819910 master-0 kubenswrapper[19715]: I0313 12:49:33.819883 19715 generic.go:334] "Generic (PLEG): container finished" podID="cf580693-2931-4fef-adb5-b396f7303352" containerID="bab02b7b0881c5a887bb7f5e343fcd3261971bd3b26625df2ad95a1d14f0e4fa" exitCode=1
Mar 13 12:49:33.827988 master-0 kubenswrapper[19715]: I0313 12:49:33.827961 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/2.log"
Mar 13 12:49:33.828500 master-0 kubenswrapper[19715]: I0313 12:49:33.828461 19715 generic.go:334] "Generic (PLEG): container finished" podID="e0763043-3813-43b6-9618-b2d15c942edb" containerID="5ef1c2475f3f7f2424a5113e9ba281fd9a18016393e29411ab9ffb53cc7cc2df" exitCode=1
Mar 13 12:49:33.830481 master-0 kubenswrapper[19715]: I0313 12:49:33.830451 19715 generic.go:334] "Generic (PLEG): container finished" podID="b6a9184d-0557-4e61-bf31-6dd69c0dfb15" containerID="4f24aa6f7ba7f467cc1097431b5fb274879298d3aa1e012c074408a731f35aa0" exitCode=0
Mar 13 12:49:33.830606 master-0 kubenswrapper[19715]: I0313 12:49:33.830586 19715 generic.go:334] "Generic (PLEG): container finished" podID="b6a9184d-0557-4e61-bf31-6dd69c0dfb15" containerID="950288614d40d58ade55b88b69f7304031ba8ba32f85625e94af8a858ab168fc" exitCode=0
Mar 13 12:49:33.838048 master-0 kubenswrapper[19715]: I0313 12:49:33.837995 19715 generic.go:334] "Generic (PLEG): container finished" podID="603fef71-e0cd-4617-bd8a-a55580578c2f" containerID="a593c0e3cdcdc60e311759e5407d46a2222b3d9d443d63f109618c4b09858401" exitCode=0
Mar 13 12:49:33.842349 master-0 kubenswrapper[19715]: I0313 12:49:33.842319 19715 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="3fb27622d9e1b78018dcca13d6addb8dfbd6860890e08e5d986124ea734db4f5" exitCode=0
Mar 13 12:49:33.842349 master-0 kubenswrapper[19715]: I0313 12:49:33.842345 19715 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="4e259bb22c1fb9d57fe107d5100650bf71d49eface516a2d4a5344dcf66f776b" exitCode=0
Mar 13 12:49:33.842506 master-0 kubenswrapper[19715]: I0313 12:49:33.842357 19715 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="80909fea02c110e1d4f337c6de383bf687899cf407ef04ed280f279d0fb78b05" exitCode=0
Mar 13 12:49:33.844419 master-0 kubenswrapper[19715]: I0313 12:49:33.844390 19715 generic.go:334] "Generic (PLEG): container finished" podID="7574e950-de2e-4f90-99d0-eae3b45cd900" containerID="0e678d645097ba94b0c7601c15c6a37574e6aeb92f0646645ec0513c11a7f373" exitCode=0
Mar 13 12:49:33.844596 master-0
kubenswrapper[19715]: E0313 12:49:33.844527 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:33.848634 master-0 kubenswrapper[19715]: I0313 12:49:33.848595 19715 generic.go:334] "Generic (PLEG): container finished" podID="6e55908e-59f3-45a2-82aa-2616c5a2fd52" containerID="7cea7ef63e0a2bbd7a51a61ea7823a56840343f0d56d2b827f3841e4907fb6b2" exitCode=0 Mar 13 12:49:33.851309 master-0 kubenswrapper[19715]: I0313 12:49:33.851269 19715 generic.go:334] "Generic (PLEG): container finished" podID="a6a45be0-19ef-4d36-b8a7-eb2705d24bfa" containerID="4b9e882a01cdfbc8bf7760e0d86d536a94312b94c74000951cc0b9a06f2c288b" exitCode=0 Mar 13 12:49:33.854351 master-0 kubenswrapper[19715]: I0313 12:49:33.854324 19715 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="0552724532a0871797536a0fa5461171eaa5b983641df0c9e3100001409bbe97" exitCode=0 Mar 13 12:49:33.854351 master-0 kubenswrapper[19715]: I0313 12:49:33.854344 19715 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="261cbab4cc990a283086b5578b976b53ce06514cd8246e1d92485867a0760ce8" exitCode=0 Mar 13 12:49:33.854351 master-0 kubenswrapper[19715]: I0313 12:49:33.854351 19715 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="1b6e1b00449d4ad0069d761f09fd31eb925ff8c4773bf223a962c96f72589083" exitCode=0 Mar 13 12:49:33.854351 master-0 kubenswrapper[19715]: I0313 12:49:33.854358 19715 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="68ab991a1ca1a43041140e5538bac0164a9cb6cf676c5102e75b42f612a72d9d" exitCode=0 Mar 13 12:49:33.854611 master-0 kubenswrapper[19715]: I0313 12:49:33.854364 19715 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" 
containerID="9caf396b8c5078621fb7d9a89a4bf5d4e00c4dccbb5c00252204a9ac1a3b5d3b" exitCode=0 Mar 13 12:49:33.854611 master-0 kubenswrapper[19715]: I0313 12:49:33.854371 19715 generic.go:334] "Generic (PLEG): container finished" podID="6d1a0616-4479-4621-b042-36a586bd8248" containerID="18a2b972b6d690603207972c9280fdef39401c1fb14724697481249e3cdd3fe3" exitCode=0 Mar 13 12:49:33.856710 master-0 kubenswrapper[19715]: I0313 12:49:33.856685 19715 generic.go:334] "Generic (PLEG): container finished" podID="54c7efc1-6d89-4831-89d6-6f2812c36c36" containerID="a96046bbc6e2f7a9efce1073fbf280ed5ef6a4fec79a22f6b7f77fdfe7b84349" exitCode=0 Mar 13 12:49:33.856710 master-0 kubenswrapper[19715]: I0313 12:49:33.856702 19715 generic.go:334] "Generic (PLEG): container finished" podID="54c7efc1-6d89-4831-89d6-6f2812c36c36" containerID="2f4b310ff7db85ab3ef583a7bbcbdfb4805f7468c4fa6d6fc7e8d6fd0d181697" exitCode=0 Mar 13 12:49:33.856710 master-0 kubenswrapper[19715]: I0313 12:49:33.856709 19715 generic.go:334] "Generic (PLEG): container finished" podID="54c7efc1-6d89-4831-89d6-6f2812c36c36" containerID="a98ec2d4ab9f0fe2fc054a10c78b5e6b8e752b65cb577bb397dd5d71aaf3f3e3" exitCode=0 Mar 13 12:49:33.858664 master-0 kubenswrapper[19715]: I0313 12:49:33.858634 19715 generic.go:334] "Generic (PLEG): container finished" podID="cf9f90f5-643f-41e8-a886-7d19fb064afc" containerID="0d07ff19cee22aeb65be4aac439ca164cb1cde9e958fa7bfb90a8bc5b4af437e" exitCode=0 Mar 13 12:49:33.858664 master-0 kubenswrapper[19715]: I0313 12:49:33.858651 19715 generic.go:334] "Generic (PLEG): container finished" podID="cf9f90f5-643f-41e8-a886-7d19fb064afc" containerID="091751b8e7d456cdc0a088c29fc232cb40bb6927c85d77df8b3128a26c86c4c6" exitCode=0 Mar 13 12:49:33.863151 master-0 kubenswrapper[19715]: I0313 12:49:33.863109 19715 generic.go:334] "Generic (PLEG): container finished" podID="3f66dbf5-722f-4aed-becb-fb1b62ea7fe6" containerID="9611f10b22041823517def90fc354bf396ed36c2da787d15f2b67268e42a0e1b" exitCode=0 Mar 13 
12:49:33.864365 master-0 kubenswrapper[19715]: I0313 12:49:33.864340 19715 generic.go:334] "Generic (PLEG): container finished" podID="7028b88a-ef6e-47f7-bbd7-cf798efdded5" containerID="79cd707206ff99c36a959e487c7685688d55e645d476231af44713218abe6dab" exitCode=0 Mar 13 12:49:33.867622 master-0 kubenswrapper[19715]: I0313 12:49:33.867565 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-fcthv_3b1777e4-6833-4b68-8cdf-ea8b36dbeae9/network-operator/0.log" Mar 13 12:49:33.867740 master-0 kubenswrapper[19715]: I0313 12:49:33.867627 19715 generic.go:334] "Generic (PLEG): container finished" podID="3b1777e4-6833-4b68-8cdf-ea8b36dbeae9" containerID="c15cc561a2dc2cb30249635a38f6de933793bd539f9b4fe8d60280e00e99d819" exitCode=255 Mar 13 12:49:33.869330 master-0 kubenswrapper[19715]: I0313 12:49:33.869285 19715 generic.go:334] "Generic (PLEG): container finished" podID="0943b2db-9658-4a8d-89da-00779d55db6e" containerID="882be8390f3c93b88f969b0da9f7aac073082985655733e890261d7e7b41c713" exitCode=0 Mar 13 12:49:33.870667 master-0 kubenswrapper[19715]: I0313 12:49:33.870594 19715 generic.go:334] "Generic (PLEG): container finished" podID="16c2d774-967f-4964-ab4e-eb13c4364f63" containerID="03adaefddde685072ec465ec3fa62e611b8564796fc923070952faebdeec68f6" exitCode=0 Mar 13 12:49:33.873747 master-0 kubenswrapper[19715]: E0313 12:49:33.873670 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 13 12:49:33.875628 master-0 kubenswrapper[19715]: I0313 12:49:33.875080 19715 generic.go:334] "Generic (PLEG): container finished" podID="f2ae954b-a362-4cd1-8e54-c4aedcf30a00" containerID="0a5b3570a3db3335c8eec162d41987493203c31e437d042c22accb68c0ffa63a" exitCode=0 Mar 13 12:49:33.881645 master-0 
kubenswrapper[19715]: I0313 12:49:33.881569 19715 generic.go:334] "Generic (PLEG): container finished" podID="6e4e773c-d970-4f5e-9172-c1ebdb41888d" containerID="712ae7e99e5d583d4f1cf7b4f887ed7099fd3d43e3fe5272361b3bb4ea67be51" exitCode=0 Mar 13 12:49:33.883894 master-0 kubenswrapper[19715]: I0313 12:49:33.883860 19715 generic.go:334] "Generic (PLEG): container finished" podID="5623ea13-a34b-4510-8902-341912d115df" containerID="82a12e1f6ddb7f481e1349942a599a492f0112c52f7c9c85db4661268c70ed21" exitCode=0 Mar 13 12:49:33.883894 master-0 kubenswrapper[19715]: I0313 12:49:33.883888 19715 generic.go:334] "Generic (PLEG): container finished" podID="5623ea13-a34b-4510-8902-341912d115df" containerID="afcd89fe0d1290aaaef3733e8919ef539e12266d0a9c01b2e1c115fd05956b73" exitCode=0 Mar 13 12:49:33.948606 master-0 kubenswrapper[19715]: E0313 12:49:33.945310 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:33.995946 master-0 kubenswrapper[19715]: E0313 12:49:33.995836 19715 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 12:49:34.048585 master-0 kubenswrapper[19715]: E0313 12:49:34.045963 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:34.146550 master-0 kubenswrapper[19715]: E0313 12:49:34.146491 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:34.263207 master-0 kubenswrapper[19715]: E0313 12:49:34.263056 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:34.276217 master-0 kubenswrapper[19715]: E0313 12:49:34.276103 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 13 12:49:34.363713 master-0 kubenswrapper[19715]: E0313 12:49:34.363588 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:34.396821 master-0 kubenswrapper[19715]: E0313 12:49:34.396118 19715 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 12:49:34.448248 master-0 kubenswrapper[19715]: W0313 12:49:34.448142 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:34.448248 master-0 kubenswrapper[19715]: E0313 12:49:34.448238 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:34.464680 master-0 kubenswrapper[19715]: E0313 12:49:34.464622 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:34.565116 master-0 kubenswrapper[19715]: E0313 12:49:34.565045 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:34.629161 master-0 kubenswrapper[19715]: I0313 12:49:34.629000 19715 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:34.645461 master-0 kubenswrapper[19715]: W0313 12:49:34.645344 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:34.645461 master-0 kubenswrapper[19715]: E0313 12:49:34.645445 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:34.665842 master-0 kubenswrapper[19715]: E0313 12:49:34.665783 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:34.766513 master-0 kubenswrapper[19715]: E0313 12:49:34.766399 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:34.867192 master-0 kubenswrapper[19715]: E0313 12:49:34.867070 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:34.899226 master-0 kubenswrapper[19715]: W0313 12:49:34.899131 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:34.899226 master-0 kubenswrapper[19715]: E0313 12:49:34.899218 19715 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:34.968413 master-0 kubenswrapper[19715]: E0313 12:49:34.967907 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:34.995700 master-0 kubenswrapper[19715]: W0313 12:49:34.995253 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:34.995700 master-0 kubenswrapper[19715]: E0313 12:49:34.995366 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:35.082857 master-0 kubenswrapper[19715]: E0313 12:49:35.074244 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:35.091834 master-0 kubenswrapper[19715]: E0313 12:49:35.084034 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 13 12:49:35.174479 master-0 kubenswrapper[19715]: E0313 12:49:35.174360 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 
12:49:35.196911 master-0 kubenswrapper[19715]: E0313 12:49:35.196834 19715 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 12:49:35.275293 master-0 kubenswrapper[19715]: E0313 12:49:35.275201 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:35.375805 master-0 kubenswrapper[19715]: E0313 12:49:35.375719 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:35.476005 master-0 kubenswrapper[19715]: E0313 12:49:35.475869 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:35.576534 master-0 kubenswrapper[19715]: E0313 12:49:35.576479 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:35.630142 master-0 kubenswrapper[19715]: I0313 12:49:35.630056 19715 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:35.677558 master-0 kubenswrapper[19715]: E0313 12:49:35.677457 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:35.778451 master-0 kubenswrapper[19715]: E0313 12:49:35.778215 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:35.878372 master-0 kubenswrapper[19715]: E0313 12:49:35.878304 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:35.899847 master-0 kubenswrapper[19715]: I0313 12:49:35.899749 19715 generic.go:334] "Generic (PLEG): container finished" 
podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="9cc438a36a13c0e2e1f239bcab312b0eda7119d2153cef22f48639612d94c13e" exitCode=0 Mar 13 12:49:35.979276 master-0 kubenswrapper[19715]: E0313 12:49:35.979204 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.079874 master-0 kubenswrapper[19715]: E0313 12:49:36.079736 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.180385 master-0 kubenswrapper[19715]: E0313 12:49:36.180323 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.280718 master-0 kubenswrapper[19715]: E0313 12:49:36.280649 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.381908 master-0 kubenswrapper[19715]: E0313 12:49:36.381747 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.482441 master-0 kubenswrapper[19715]: E0313 12:49:36.482340 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.583136 master-0 kubenswrapper[19715]: E0313 12:49:36.583076 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.629394 master-0 kubenswrapper[19715]: I0313 12:49:36.629328 19715 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:36.683328 master-0 kubenswrapper[19715]: E0313 12:49:36.683196 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.686917 
master-0 kubenswrapper[19715]: E0313 12:49:36.685457 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 13 12:49:36.751503 master-0 kubenswrapper[19715]: W0313 12:49:36.750284 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:36.751503 master-0 kubenswrapper[19715]: E0313 12:49:36.750404 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:36.783481 master-0 kubenswrapper[19715]: E0313 12:49:36.783418 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.796991 master-0 kubenswrapper[19715]: E0313 12:49:36.796934 19715 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 12:49:36.883602 master-0 kubenswrapper[19715]: E0313 12:49:36.883525 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.906512 master-0 kubenswrapper[19715]: I0313 12:49:36.906461 19715 manager.go:324] Recovery completed Mar 13 12:49:36.907213 master-0 kubenswrapper[19715]: I0313 12:49:36.907179 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_80ceb0f9-67e4-4275-8532-85b6602367a2/installer/0.log" Mar 13 12:49:36.907277 master-0 kubenswrapper[19715]: I0313 12:49:36.907245 19715 generic.go:334] "Generic (PLEG): container finished" podID="80ceb0f9-67e4-4275-8532-85b6602367a2" containerID="c83ff937194332d291b1b5b800ca7831144c85fa708fce3eae5e12903a82439b" exitCode=1 Mar 13 12:49:36.971854 master-0 kubenswrapper[19715]: I0313 12:49:36.971724 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:36.977393 master-0 kubenswrapper[19715]: I0313 12:49:36.977336 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:36.977393 master-0 kubenswrapper[19715]: I0313 12:49:36.977387 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:36.977393 master-0 kubenswrapper[19715]: I0313 12:49:36.977400 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:36.980932 master-0 kubenswrapper[19715]: I0313 12:49:36.980899 19715 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 12:49:36.980932 master-0 kubenswrapper[19715]: I0313 12:49:36.980921 19715 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 12:49:36.981056 master-0 kubenswrapper[19715]: I0313 12:49:36.980954 19715 state_mem.go:36] "Initialized new in-memory state store" Mar 13 12:49:36.981153 master-0 kubenswrapper[19715]: I0313 12:49:36.981129 19715 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 12:49:36.981200 master-0 kubenswrapper[19715]: I0313 12:49:36.981146 19715 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 12:49:36.981200 master-0 kubenswrapper[19715]: I0313 12:49:36.981180 19715 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 13 
12:49:36.981200 master-0 kubenswrapper[19715]: I0313 12:49:36.981187 19715 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 13 12:49:36.981200 master-0 kubenswrapper[19715]: I0313 12:49:36.981194 19715 policy_none.go:49] "None policy: Start" Mar 13 12:49:36.984752 master-0 kubenswrapper[19715]: E0313 12:49:36.984187 19715 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 12:49:36.986402 master-0 kubenswrapper[19715]: I0313 12:49:36.985545 19715 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 12:49:36.986402 master-0 kubenswrapper[19715]: I0313 12:49:36.985617 19715 state_mem.go:35] "Initializing new in-memory state store" Mar 13 12:49:36.986402 master-0 kubenswrapper[19715]: I0313 12:49:36.985839 19715 state_mem.go:75] "Updated machine memory state" Mar 13 12:49:36.986402 master-0 kubenswrapper[19715]: I0313 12:49:36.985861 19715 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 13 12:49:36.996798 master-0 kubenswrapper[19715]: I0313 12:49:36.996756 19715 manager.go:334] "Starting Device Plugin manager" Mar 13 12:49:36.996916 master-0 kubenswrapper[19715]: I0313 12:49:36.996839 19715 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 12:49:36.996916 master-0 kubenswrapper[19715]: I0313 12:49:36.996864 19715 server.go:79] "Starting device plugin registration server" Mar 13 12:49:36.997306 master-0 kubenswrapper[19715]: I0313 12:49:36.997271 19715 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 12:49:36.997394 master-0 kubenswrapper[19715]: I0313 12:49:36.997297 19715 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 12:49:36.997928 master-0 kubenswrapper[19715]: I0313 12:49:36.997889 19715 plugin_watcher.go:51] "Plugin Watcher Start" 
path="/var/lib/kubelet/plugins_registry" Mar 13 12:49:36.998011 master-0 kubenswrapper[19715]: I0313 12:49:36.998003 19715 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 12:49:36.998058 master-0 kubenswrapper[19715]: I0313 12:49:36.998014 19715 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 12:49:37.005608 master-0 kubenswrapper[19715]: E0313 12:49:37.005408 19715 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 12:49:37.026641 master-0 kubenswrapper[19715]: W0313 12:49:37.026306 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:37.026875 master-0 kubenswrapper[19715]: E0313 12:49:37.026670 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:37.097501 master-0 kubenswrapper[19715]: I0313 12:49:37.097407 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:37.100374 master-0 kubenswrapper[19715]: I0313 12:49:37.100330 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:37.100374 master-0 kubenswrapper[19715]: I0313 12:49:37.100372 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:37.100540 master-0 kubenswrapper[19715]: I0313 12:49:37.100385 19715 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:37.100540 master-0 kubenswrapper[19715]: I0313 12:49:37.100406 19715 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:49:37.101180 master-0 kubenswrapper[19715]: E0313 12:49:37.101134 19715 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 12:49:37.302406 master-0 kubenswrapper[19715]: I0313 12:49:37.302251 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:37.304385 master-0 kubenswrapper[19715]: I0313 12:49:37.304344 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:37.304385 master-0 kubenswrapper[19715]: I0313 12:49:37.304377 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:37.304385 master-0 kubenswrapper[19715]: I0313 12:49:37.304385 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:37.304385 master-0 kubenswrapper[19715]: I0313 12:49:37.304405 19715 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:49:37.305089 master-0 kubenswrapper[19715]: E0313 12:49:37.305042 19715 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 12:49:37.359730 master-0 kubenswrapper[19715]: W0313 12:49:37.359632 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:37.359730 master-0 kubenswrapper[19715]: E0313 12:49:37.359715 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:37.629562 master-0 kubenswrapper[19715]: I0313 12:49:37.629407 19715 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:37.705290 master-0 kubenswrapper[19715]: I0313 12:49:37.705198 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:37.708209 master-0 kubenswrapper[19715]: I0313 12:49:37.708177 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:37.708209 master-0 kubenswrapper[19715]: I0313 12:49:37.708212 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:37.708323 master-0 kubenswrapper[19715]: I0313 12:49:37.708222 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:37.708323 master-0 kubenswrapper[19715]: I0313 12:49:37.708244 19715 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:49:37.709118 master-0 kubenswrapper[19715]: E0313 12:49:37.709068 19715 kubelet_node_status.go:99] "Unable to register node with API 
server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 12:49:38.095478 master-0 kubenswrapper[19715]: W0313 12:49:38.095382 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:38.095478 master-0 kubenswrapper[19715]: E0313 12:49:38.095475 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 12:49:38.510033 master-0 kubenswrapper[19715]: I0313 12:49:38.509937 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:38.514538 master-0 kubenswrapper[19715]: I0313 12:49:38.514486 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:38.514899 master-0 kubenswrapper[19715]: I0313 12:49:38.514881 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:38.515189 master-0 kubenswrapper[19715]: I0313 12:49:38.515176 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:38.515294 master-0 kubenswrapper[19715]: I0313 12:49:38.515278 19715 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:49:38.517310 master-0 kubenswrapper[19715]: E0313 12:49:38.517241 19715 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 12:49:38.630330 master-0 kubenswrapper[19715]: I0313 12:49:38.630227 19715 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:38.810627 master-0 kubenswrapper[19715]: E0313 12:49:38.810353 19715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c6784dd974c8e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:49:33.62815707 +0000 UTC m=+0.194829827,LastTimestamp:2026-03-13 12:49:33.62815707 +0000 UTC m=+0.194829827,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:49:39.629999 master-0 kubenswrapper[19715]: I0313 12:49:39.629906 19715 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:49:39.888367 master-0 kubenswrapper[19715]: E0313 12:49:39.888299 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 
192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 13 12:49:39.997467 master-0 kubenswrapper[19715]: I0313 12:49:39.997345 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 12:49:39.997718 master-0 kubenswrapper[19715]: I0313 12:49:39.997525 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.000478 master-0 kubenswrapper[19715]: I0313 12:49:40.000438 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.000478 master-0 kubenswrapper[19715]: I0313 12:49:40.000476 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.000628 master-0 kubenswrapper[19715]: I0313 12:49:40.000486 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.000628 master-0 kubenswrapper[19715]: I0313 12:49:40.000601 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.000832 master-0 kubenswrapper[19715]: I0313 12:49:40.000796 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.003162 master-0 kubenswrapper[19715]: I0313 12:49:40.003106 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.003248 master-0 kubenswrapper[19715]: I0313 12:49:40.003168 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" 
event="NodeHasNoDiskPressure" Mar 13 12:49:40.003248 master-0 kubenswrapper[19715]: I0313 12:49:40.003118 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.003248 master-0 kubenswrapper[19715]: I0313 12:49:40.003233 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.003248 master-0 kubenswrapper[19715]: I0313 12:49:40.003249 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.003448 master-0 kubenswrapper[19715]: I0313 12:49:40.003180 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.003448 master-0 kubenswrapper[19715]: I0313 12:49:40.003416 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.003654 master-0 kubenswrapper[19715]: I0313 12:49:40.003553 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.006061 master-0 kubenswrapper[19715]: I0313 12:49:40.006031 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.006173 master-0 kubenswrapper[19715]: I0313 12:49:40.006069 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.006173 master-0 kubenswrapper[19715]: I0313 12:49:40.006088 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.006173 master-0 kubenswrapper[19715]: I0313 12:49:40.006031 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.006173 master-0 kubenswrapper[19715]: I0313 12:49:40.006147 19715 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.006173 master-0 kubenswrapper[19715]: I0313 12:49:40.006161 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.006395 master-0 kubenswrapper[19715]: I0313 12:49:40.006262 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.006461 master-0 kubenswrapper[19715]: I0313 12:49:40.006434 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.008461 master-0 kubenswrapper[19715]: I0313 12:49:40.008427 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.008461 master-0 kubenswrapper[19715]: I0313 12:49:40.008458 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.008621 master-0 kubenswrapper[19715]: I0313 12:49:40.008468 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.008621 master-0 kubenswrapper[19715]: I0313 12:49:40.008619 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.008693 master-0 kubenswrapper[19715]: I0313 12:49:40.008659 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.008725 master-0 kubenswrapper[19715]: I0313 12:49:40.008695 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.008725 master-0 kubenswrapper[19715]: I0313 12:49:40.008708 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 
12:49:40.008797 master-0 kubenswrapper[19715]: I0313 12:49:40.008765 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.010617 master-0 kubenswrapper[19715]: I0313 12:49:40.010563 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.010617 master-0 kubenswrapper[19715]: I0313 12:49:40.010610 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.010617 master-0 kubenswrapper[19715]: I0313 12:49:40.010620 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.010774 master-0 kubenswrapper[19715]: I0313 12:49:40.010707 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.010774 master-0 kubenswrapper[19715]: I0313 12:49:40.010739 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.010774 master-0 kubenswrapper[19715]: I0313 12:49:40.010747 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.010888 master-0 kubenswrapper[19715]: I0313 12:49:40.010752 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.010888 master-0 kubenswrapper[19715]: I0313 12:49:40.010876 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:40.010998 master-0 kubenswrapper[19715]: I0313 12:49:40.010930 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.014754 master-0 kubenswrapper[19715]: I0313 12:49:40.014471 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.014754 master-0 kubenswrapper[19715]: I0313 12:49:40.014530 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.014754 master-0 kubenswrapper[19715]: I0313 12:49:40.014546 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.015211 master-0 kubenswrapper[19715]: I0313 12:49:40.014801 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="90279ca564e83f63eaf1b9ddebe2c2557bd9c27dd880ed894a069d9a79f4f270" Mar 13 12:49:40.015765 master-0 kubenswrapper[19715]: I0313 12:49:40.015651 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"20738ab02637717910251883b8d669f0a85804f124bfcd78ee15eab7a5a827e7"} Mar 13 12:49:40.015765 master-0 kubenswrapper[19715]: I0313 12:49:40.015756 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"47aafa637897db874d2f314c91d98220473a29ec5c1860c9183088400b424069"} Mar 13 12:49:40.015911 master-0 kubenswrapper[19715]: I0313 12:49:40.015783 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" 
event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04"} Mar 13 12:49:40.015911 master-0 kubenswrapper[19715]: I0313 12:49:40.015830 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9689167e4adfbea953806301dad86365ee4722270dda306dcdfea611bbd4abda" Mar 13 12:49:40.015911 master-0 kubenswrapper[19715]: I0313 12:49:40.015862 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"dd9e5e8e374c81e1c66f6e45811bee38c8f529d7dd83812725266a3311710c8f"} Mar 13 12:49:40.015911 master-0 kubenswrapper[19715]: I0313 12:49:40.015876 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561"} Mar 13 12:49:40.015911 master-0 kubenswrapper[19715]: I0313 12:49:40.015889 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"5ae7ae35f7136762cbb13e8c36aee38aecdcf9e047584314d44cc6cd1301533e"} Mar 13 12:49:40.015911 master-0 kubenswrapper[19715]: I0313 12:49:40.015903 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.015930 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c01927a76a297da5840d73eff9921d3c26cf5f0e7c0b06e61b8b4a6964b05b8" Mar 13 12:49:40.016270 
master-0 kubenswrapper[19715]: I0313 12:49:40.016047 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"ed78e1786123e1fdf666e037202049096483e9131a9b2ba5d12c1d669373c1fa"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.016072 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"fa04bd1d8b838a856ef3334cc68d9da0449dbf549bcd199af5292664d8bc9f66"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.016086 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"68b6f8966a17045ff6a5d27e4da4e48714a155c30c56d6be16050ed7473f6700"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.016099 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"b46a9d0ffbc090b147507e1248316eff71045a30827890533e43f4fb86d60226"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.016183 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"6c23cdc6601b96e6cd9e782c6c966a61626e147eafd04b183861551e09d61efd"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.016200 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"cb9739e016267022e31ecd49dd353d0fbb312c39344f7ea6d0f628422bd671c7"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.016226 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"bbb69daa8aec5294f802e0fd923d615b5bd9b54c9ad727dc130730d0148b4189"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.016242 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"957fff4e2e7c0e348e670f1e1bbd14c0b7e69017fdbbdc00acbf646f8f370e16"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.016259 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"93fcfaf9014e1420b0964c2d7727fae3c21363a8bdefab275c2ffbb8ad00d4b9"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.016276 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"3fb27622d9e1b78018dcca13d6addb8dfbd6860890e08e5d986124ea734db4f5"} Mar 13 12:49:40.016270 master-0 kubenswrapper[19715]: I0313 12:49:40.016290 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"4e259bb22c1fb9d57fe107d5100650bf71d49eface516a2d4a5344dcf66f776b"} Mar 13 12:49:40.016929 master-0 kubenswrapper[19715]: I0313 12:49:40.016302 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"80909fea02c110e1d4f337c6de383bf687899cf407ef04ed280f279d0fb78b05"} Mar 13 12:49:40.016929 master-0 kubenswrapper[19715]: I0313 12:49:40.016320 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"e6d506914f674acae7c420a21d64287e5d50a2208f22be2bad24040b690bdfea"} Mar 13 12:49:40.016929 master-0 kubenswrapper[19715]: I0313 12:49:40.016412 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d2b45e42e0e063443f8930f6b7d09a6d020a634d13e2cb7c2ed7329e003e782" Mar 13 12:49:40.016929 master-0 kubenswrapper[19715]: I0313 12:49:40.016422 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.016929 master-0 kubenswrapper[19715]: I0313 12:49:40.016457 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c009957e6c0e1187ad15c0418c800a108103fed32e75490f5bcdf096c17f2c6" Mar 13 12:49:40.016929 master-0 kubenswrapper[19715]: I0313 12:49:40.016485 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.016929 master-0 kubenswrapper[19715]: I0313 12:49:40.016500 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.017552 master-0 kubenswrapper[19715]: I0313 12:49:40.017507 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:40.017552 master-0 kubenswrapper[19715]: I0313 12:49:40.017546 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.019695 master-0 kubenswrapper[19715]: I0313 12:49:40.019659 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.019695 master-0 kubenswrapper[19715]: I0313 12:49:40.019695 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.019841 master-0 kubenswrapper[19715]: I0313 12:49:40.019708 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.118077 master-0 kubenswrapper[19715]: I0313 12:49:40.117997 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:40.120892 master-0 kubenswrapper[19715]: I0313 12:49:40.120855 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:40.120978 master-0 kubenswrapper[19715]: I0313 12:49:40.120895 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:40.120978 master-0 kubenswrapper[19715]: I0313 12:49:40.120907 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:40.120978 master-0 kubenswrapper[19715]: I0313 12:49:40.120928 19715 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 12:49:40.121862 master-0 kubenswrapper[19715]: E0313 12:49:40.121804 19715 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 
192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 12:49:40.128444 master-0 kubenswrapper[19715]: I0313 12:49:40.128358 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:40.128543 master-0 kubenswrapper[19715]: I0313 12:49:40.128451 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:40.128543 master-0 kubenswrapper[19715]: I0313 12:49:40.128509 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:40.128543 master-0 kubenswrapper[19715]: I0313 12:49:40.128532 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:49:40.128699 master-0 kubenswrapper[19715]: I0313 12:49:40.128560 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: 
\"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:49:40.128699 master-0 kubenswrapper[19715]: I0313 12:49:40.128606 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:49:40.128699 master-0 kubenswrapper[19715]: I0313 12:49:40.128628 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:40.128699 master-0 kubenswrapper[19715]: I0313 12:49:40.128659 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:49:40.128817 master-0 kubenswrapper[19715]: I0313 12:49:40.128762 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:40.128866 master-0 kubenswrapper[19715]: I0313 12:49:40.128843 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:40.128902 master-0 kubenswrapper[19715]: I0313 12:49:40.128872 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:49:40.128902 master-0 kubenswrapper[19715]: I0313 12:49:40.128890 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:49:40.128983 master-0 kubenswrapper[19715]: I0313 12:49:40.128907 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 12:49:40.128983 master-0 kubenswrapper[19715]: I0313 12:49:40.128931 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:49:40.128983 master-0 kubenswrapper[19715]: I0313 12:49:40.128949 19715 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 12:49:40.129087 master-0 kubenswrapper[19715]: I0313 12:49:40.129015 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:40.129121 master-0 kubenswrapper[19715]: I0313 12:49:40.129078 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:49:40.129164 master-0 kubenswrapper[19715]: I0313 12:49:40.129121 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:49:40.129164 master-0 kubenswrapper[19715]: I0313 12:49:40.129153 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.129239 master-0 kubenswrapper[19715]: I0313 12:49:40.129183 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:49:40.129239 master-0 kubenswrapper[19715]: I0313 12:49:40.129211 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:49:40.129319 master-0 kubenswrapper[19715]: I0313 12:49:40.129244 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.129319 master-0 kubenswrapper[19715]: I0313 12:49:40.129284 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.230803 master-0 kubenswrapper[19715]: I0313 12:49:40.230636 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.230803 master-0 kubenswrapper[19715]: I0313 12:49:40.230704 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.230803 master-0 kubenswrapper[19715]: I0313 12:49:40.230733 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:49:40.230803 master-0 kubenswrapper[19715]: I0313 12:49:40.230786 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.230803 master-0 kubenswrapper[19715]: I0313 12:49:40.230790 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.230825 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.230857 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.230867 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.230882 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.230904 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.230903 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.230928 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.230950 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.231005 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.231045 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.231059 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.231063 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.231064 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.231147 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.231098 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.231100 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.231195 master-0 kubenswrapper[19715]: I0313 12:49:40.231126 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231229 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231264 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231287 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231339 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231331 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231390 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231400 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231439 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231461 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231487 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231494 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231510 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231509 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231531 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231542 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231550 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231603 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231634 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231655 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231686 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231716 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231743 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.231899 master-0 kubenswrapper[19715]: I0313 12:49:40.231773 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.232781 master-0 kubenswrapper[19715]: I0313 12:49:40.231935 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.304317 master-0 kubenswrapper[19715]: I0313 12:49:40.304231 19715 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561"
Mar 13 12:49:40.312606 master-0 kubenswrapper[19715]: I0313 12:49:40.312176 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.318872 master-0 kubenswrapper[19715]: I0313 12:49:40.318708 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:49:40.323409 master-0 kubenswrapper[19715]: I0313 12:49:40.323223 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:49:40.328712 master-0 kubenswrapper[19715]: I0313 12:49:40.328667 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 13 12:49:40.378874 master-0 kubenswrapper[19715]: I0313 12:49:40.378818 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.378874 master-0 kubenswrapper[19715]: I0313 12:49:40.378881 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:40.629509 master-0 kubenswrapper[19715]: I0313 12:49:40.629459 19715 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:49:40.797980 master-0 kubenswrapper[19715]: W0313 12:49:40.797755 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:49:40.797980 master-0 kubenswrapper[19715]: E0313 12:49:40.797844 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:49:40.845211 master-0 kubenswrapper[19715]: W0313 12:49:40.845136 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:49:40.845211 master-0 kubenswrapper[19715]: E0313 12:49:40.845214 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:49:40.937421 master-0 kubenswrapper[19715]: I0313 12:49:40.937355 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72"}
Mar 13 12:49:40.937421 master-0 kubenswrapper[19715]: I0313 12:49:40.937390 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:40.940632 master-0 kubenswrapper[19715]: I0313 12:49:40.940466 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"f417e14665db2ffffa887ce21c9ff0ed","Type":"ContainerStarted","Data":"0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d"}
Mar 13 12:49:40.940632 master-0 kubenswrapper[19715]: I0313 12:49:40.940548 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"f417e14665db2ffffa887ce21c9ff0ed","Type":"ContainerStarted","Data":"8a583731137a7385a7d532038d435513937679e616bcfde43920b2ae98beb9e5"}
Mar 13 12:49:40.940911 master-0 kubenswrapper[19715]: I0313 12:49:40.940690 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:40.940911 master-0 kubenswrapper[19715]: I0313 12:49:40.940749 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:40.940911 master-0 kubenswrapper[19715]: I0313 12:49:40.940709 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:40.940911 master-0 kubenswrapper[19715]: I0313 12:49:40.940762 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:40.945159 master-0 kubenswrapper[19715]: I0313 12:49:40.942781 19715 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d" exitCode=0
Mar 13 12:49:40.945159 master-0 kubenswrapper[19715]: I0313 12:49:40.942990 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:40.945159 master-0 kubenswrapper[19715]: I0313 12:49:40.943571 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:40.945159 master-0 kubenswrapper[19715]: I0313 12:49:40.944950 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerDied","Data":"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d"}
Mar 13 12:49:40.945159 master-0 kubenswrapper[19715]: I0313 12:49:40.944996 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"92a5b2c43b1bd9212e7194d85625acb1507dbe44b58cc06b876674667a312eb6"}
Mar 13 12:49:40.945159 master-0 kubenswrapper[19715]: I0313 12:49:40.945069 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:40.945159 master-0 kubenswrapper[19715]: I0313 12:49:40.945104 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:40.946298 master-0 kubenswrapper[19715]: I0313 12:49:40.945188 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:40.946298 master-0 kubenswrapper[19715]: I0313 12:49:40.945108 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:40.946298 master-0 kubenswrapper[19715]: I0313 12:49:40.945482 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:40.957704 master-0 kubenswrapper[19715]: I0313 12:49:40.955204 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:40.957704 master-0 kubenswrapper[19715]: I0313 12:49:40.955294 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:40.957704 master-0 kubenswrapper[19715]: I0313 12:49:40.955323 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:40.957704 master-0 kubenswrapper[19715]: I0313 12:49:40.956156 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:40.957704 master-0 kubenswrapper[19715]: I0313 12:49:40.956208 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:40.957704 master-0 kubenswrapper[19715]: I0313 12:49:40.956243 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:40.957704 master-0 kubenswrapper[19715]: I0313 12:49:40.957313 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:40.957704 master-0 kubenswrapper[19715]: I0313 12:49:40.957365 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:40.957704 master-0 kubenswrapper[19715]: I0313 12:49:40.957392 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:40.960704 master-0 kubenswrapper[19715]: I0313 12:49:40.960638 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:40.960704 master-0 kubenswrapper[19715]: I0313 12:49:40.960703 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:40.961244 master-0 kubenswrapper[19715]: I0313 12:49:40.960731 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:40.963318 master-0 kubenswrapper[19715]: I0313 12:49:40.963245 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:40.967149 master-0 kubenswrapper[19715]: I0313 12:49:40.967072 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:40.967149 master-0 kubenswrapper[19715]: I0313 12:49:40.967123 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:40.967149 master-0 kubenswrapper[19715]: I0313 12:49:40.967138 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:41.357457 master-0 kubenswrapper[19715]: W0313 12:49:41.357261 19715 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused
Mar 13 12:49:41.357675 master-0 kubenswrapper[19715]: E0313 12:49:41.357470 19715 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 12:49:42.175596 master-0 kubenswrapper[19715]: I0313 12:49:42.175519 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4"}
Mar 13 12:49:42.175596 master-0 kubenswrapper[19715]: I0313 12:49:42.175592 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e"}
Mar 13 12:49:42.190243 master-0 kubenswrapper[19715]: I0313 12:49:42.190181 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72"}
Mar 13 12:49:42.190385 master-0 kubenswrapper[19715]: I0313 12:49:42.190259 19715 scope.go:117] "RemoveContainer" containerID="17ffb6635ab55a5efe5a1e80fe730b0d2a4c7fb2067273663ff23cce6b3d9561"
Mar 13 12:49:42.190385 master-0 kubenswrapper[19715]: I0313 12:49:42.190281 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:42.190709 master-0 kubenswrapper[19715]: I0313 12:49:42.190096 19715 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72" exitCode=1
Mar 13 12:49:42.190987 master-0 kubenswrapper[19715]: I0313 12:49:42.190952 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:42.194447 master-0 kubenswrapper[19715]: I0313 12:49:42.194388 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:42.194657 master-0 kubenswrapper[19715]: I0313 12:49:42.194461 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:42.194657 master-0 kubenswrapper[19715]: I0313 12:49:42.194498 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:42.196316 master-0 kubenswrapper[19715]: I0313 12:49:42.195243 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72"
Mar 13 12:49:42.196316 master-0 kubenswrapper[19715]: I0313 12:49:42.196220 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:42.196316 master-0 kubenswrapper[19715]: I0313 12:49:42.196265 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:42.196316 master-0 kubenswrapper[19715]: I0313 12:49:42.196276 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:42.197996 master-0 kubenswrapper[19715]: E0313 12:49:42.195609 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:49:42.802299 master-0 kubenswrapper[19715]: I0313 12:49:42.801264 19715 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:43.403295 master-0 kubenswrapper[19715]: I0313 12:49:43.402932 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:43.410047 master-0 kubenswrapper[19715]: I0313 12:49:43.409972 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:43.410047 master-0 kubenswrapper[19715]: I0313 12:49:43.410037 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:43.410047 master-0 kubenswrapper[19715]: I0313 12:49:43.410050 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:43.410375 master-0 kubenswrapper[19715]: I0313 12:49:43.410082 19715 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 12:49:43.428767 master-0 kubenswrapper[19715]: I0313 12:49:43.428703 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d"}
Mar 13 12:49:43.428767 master-0 kubenswrapper[19715]: I0313 12:49:43.428755 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a"}
Mar 13 12:49:43.428767 master-0 kubenswrapper[19715]: I0313 12:49:43.428770 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771"}
Mar 13 12:49:43.429124 master-0 kubenswrapper[19715]: I0313 12:49:43.428909 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:43.431204 master-0 kubenswrapper[19715]: I0313 12:49:43.431160 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:43.431204 master-0 kubenswrapper[19715]: I0313 12:49:43.431190 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:43.431204 master-0 kubenswrapper[19715]: I0313 12:49:43.431199 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:43.444861 master-0 kubenswrapper[19715]: I0313 12:49:43.444788 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:43.460557 master-0 kubenswrapper[19715]: I0313 12:49:43.460484 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:43.460557 master-0 kubenswrapper[19715]: I0313 12:49:43.460537 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:43.460557 master-0 kubenswrapper[19715]: I0313 12:49:43.460548 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:43.461071 master-0 kubenswrapper[19715]: I0313 12:49:43.461037 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72"
Mar 13 12:49:43.461338 master-0 kubenswrapper[19715]: E0313 12:49:43.461300 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:49:44.037640 master-0 kubenswrapper[19715]: I0313 12:49:44.037096 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:44.050483 master-0 kubenswrapper[19715]: I0313 12:49:44.049755 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:44.450166 master-0 kubenswrapper[19715]: I0313 12:49:44.450101 19715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:49:44.450166 master-0 kubenswrapper[19715]: I0313 12:49:44.450165 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:44.452252 master-0 kubenswrapper[19715]: I0313 12:49:44.452229 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 12:49:44.454121 master-0 kubenswrapper[19715]: I0313 12:49:44.454091 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:44.454202 master-0 kubenswrapper[19715]: I0313 12:49:44.454125 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 12:49:44.454202 master-0 kubenswrapper[19715]: I0313 12:49:44.454135 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 12:49:44.454598 master-0 kubenswrapper[19715]: I0313 12:49:44.454558 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72"
Mar 13 12:49:44.454859 master-0 kubenswrapper[19715]: E0313 12:49:44.454819 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:49:44.455067 master-0 kubenswrapper[19715]: I0313 12:49:44.455049 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 12:49:44.455143 master-0 kubenswrapper[19715]: I0313 12:49:44.455073 19715 kubelet_node_status.go:724] "Recording event message for node"
node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:44.455143 master-0 kubenswrapper[19715]: I0313 12:49:44.455084 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:45.319856 master-0 kubenswrapper[19715]: I0313 12:49:45.319758 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:45.320141 master-0 kubenswrapper[19715]: I0313 12:49:45.320112 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:45.325527 master-0 kubenswrapper[19715]: I0313 12:49:45.325472 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:45.455937 master-0 kubenswrapper[19715]: I0313 12:49:45.455873 19715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:49:45.455937 master-0 kubenswrapper[19715]: I0313 12:49:45.455912 19715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:49:45.456542 master-0 kubenswrapper[19715]: I0313 12:49:45.455972 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:45.456542 master-0 kubenswrapper[19715]: I0313 12:49:45.455933 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:45.459665 master-0 kubenswrapper[19715]: I0313 12:49:45.458911 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:45.459665 master-0 kubenswrapper[19715]: I0313 12:49:45.458947 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:45.459665 master-0 kubenswrapper[19715]: I0313 12:49:45.458957 19715 
kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:45.459665 master-0 kubenswrapper[19715]: I0313 12:49:45.459340 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72" Mar 13 12:49:45.459665 master-0 kubenswrapper[19715]: E0313 12:49:45.459617 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:45.460628 master-0 kubenswrapper[19715]: I0313 12:49:45.459998 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:45.460628 master-0 kubenswrapper[19715]: I0313 12:49:45.460028 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:45.460628 master-0 kubenswrapper[19715]: I0313 12:49:45.460036 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:45.460889 master-0 kubenswrapper[19715]: I0313 12:49:45.460859 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:46.404455 master-0 kubenswrapper[19715]: I0313 12:49:46.404350 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:49:46.409722 master-0 kubenswrapper[19715]: I0313 12:49:46.409670 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 
12:49:46.466297 master-0 kubenswrapper[19715]: I0313 12:49:46.466204 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:46.466297 master-0 kubenswrapper[19715]: I0313 12:49:46.466245 19715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:49:46.466908 master-0 kubenswrapper[19715]: I0313 12:49:46.466328 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:46.469669 master-0 kubenswrapper[19715]: I0313 12:49:46.469636 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:46.469669 master-0 kubenswrapper[19715]: I0313 12:49:46.469661 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:46.469800 master-0 kubenswrapper[19715]: I0313 12:49:46.469678 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:46.469800 master-0 kubenswrapper[19715]: I0313 12:49:46.469689 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:46.469800 master-0 kubenswrapper[19715]: I0313 12:49:46.469690 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:46.469800 master-0 kubenswrapper[19715]: I0313 12:49:46.469783 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:46.470059 master-0 kubenswrapper[19715]: I0313 12:49:46.470036 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72" Mar 13 12:49:46.470263 master-0 kubenswrapper[19715]: E0313 12:49:46.470237 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:47.005602 master-0 kubenswrapper[19715]: E0313 12:49:47.005527 19715 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 12:49:47.092449 master-0 kubenswrapper[19715]: I0313 12:49:47.092370 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 13 12:49:47.093060 master-0 kubenswrapper[19715]: I0313 12:49:47.092688 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:47.096089 master-0 kubenswrapper[19715]: I0313 12:49:47.095939 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:47.097625 master-0 kubenswrapper[19715]: I0313 12:49:47.097094 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:47.097625 master-0 kubenswrapper[19715]: I0313 12:49:47.097124 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:47.103552 master-0 kubenswrapper[19715]: I0313 12:49:47.102557 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:49:47.109760 master-0 kubenswrapper[19715]: I0313 12:49:47.109729 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 13 12:49:47.471713 master-0 kubenswrapper[19715]: I0313 12:49:47.471665 19715 kubelet_node_status.go:401] 
"Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:47.471713 master-0 kubenswrapper[19715]: I0313 12:49:47.471696 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:47.472349 master-0 kubenswrapper[19715]: I0313 12:49:47.471696 19715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 12:49:47.472349 master-0 kubenswrapper[19715]: I0313 12:49:47.471854 19715 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 12:49:47.478921 master-0 kubenswrapper[19715]: I0313 12:49:47.478875 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:47.478921 master-0 kubenswrapper[19715]: I0313 12:49:47.478907 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:47.479221 master-0 kubenswrapper[19715]: I0313 12:49:47.478944 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:47.479221 master-0 kubenswrapper[19715]: I0313 12:49:47.478954 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:47.479221 master-0 kubenswrapper[19715]: I0313 12:49:47.478920 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:47.479221 master-0 kubenswrapper[19715]: I0313 12:49:47.479003 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:47.479221 master-0 kubenswrapper[19715]: I0313 12:49:47.479119 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 12:49:47.479221 master-0 kubenswrapper[19715]: I0313 12:49:47.479143 
19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 12:49:47.479221 master-0 kubenswrapper[19715]: I0313 12:49:47.479155 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 12:49:47.479557 master-0 kubenswrapper[19715]: I0313 12:49:47.479467 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72" Mar 13 12:49:47.479826 master-0 kubenswrapper[19715]: E0313 12:49:47.479790 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:48.331979 master-0 kubenswrapper[19715]: I0313 12:49:48.331923 19715 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 12:49:48.492915 master-0 kubenswrapper[19715]: E0313 12:49:48.492813 19715 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:49:48.493538 master-0 kubenswrapper[19715]: I0313 12:49:48.493091 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72" Mar 13 12:49:48.493538 master-0 kubenswrapper[19715]: E0313 12:49:48.493351 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager 
pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:48.634096 master-0 kubenswrapper[19715]: I0313 12:49:48.634065 19715 apiserver.go:52] "Watching apiserver" Mar 13 12:49:48.654883 master-0 kubenswrapper[19715]: I0313 12:49:48.654759 19715 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 12:49:48.662810 master-0 kubenswrapper[19715]: I0313 12:49:48.662627 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-jjmb8","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h","openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv","openshift-kube-apiserver/installer-1-retry-1-master-0","openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz","openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-kube-apiserver/installer-1-master-0","openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828","openshift-cluster-node-tuning-operator/tuned-d7h2t","openshift-etcd/etcd-master-0","openshift-ingress-operator/ingress-operator-677db989d6-9nxcz","openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-node-identity/network-node-identity-kb5r7","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf","openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh","openshift-dns/dns-default-qh2tf","openshift-kube-controller-manager/installer-2-master-0","openshift-marketplace/marketplace-operator-64bf9778cb-7wnld","openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k","openshift-operator-lifecycle-manager/pa
ckageserver-5d9d8b6575-fk9v2","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b","openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt","openshift-marketplace/redhat-marketplace-92rsn","assisted-installer/assisted-installer-controller-7vm6x","openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8","openshift-service-ca/service-ca-84bfdbbb7f-cgw5c","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg","openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2","openshift-marketplace/redhat-operators-28fdg","openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577","openshift-machine-config-operator/machine-config-daemon-mlgxw","openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f","openshift-kube-scheduler/installer-5-master-0","openshift-multus/multus-additional-cni-plugins-wl6w4","openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn","openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr","openshift-config-operator/openshift-config-operator-64488f9d78-tml9z","openshift-dns-operator/dns-operator-589895fbb7-w7mv2","openshift-etcd/installer-2-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z","openshift-marketplace/community-operators-6w8hd","openshift-network-operator/iptables-alerter-456r5","openshift-apiserver/apiserver-8459d5b549-n9fzj","openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn","openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn","kube-system/bootstrap-kube-scheduler-master-0","openshift-monitoring/cluster-monitoring-operator-674c
bfbd9d-4jlnk","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf","openshift-multus/multus-admission-controller-8d675b596-pbgd4","openshift-multus/network-metrics-daemon-ztpxf","openshift-ovn-kubernetes/ovnkube-node-vlrf6","openshift-insights/insights-operator-8f89dfddd-s4gd8","openshift-kube-scheduler/installer-4-master-0","openshift-network-operator/network-operator-7c649bf6d4-fcthv","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc","openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h","openshift-dns/node-resolver-5jth9","openshift-marketplace/certified-operators-6vng8","openshift-multus/multus-6c7r9","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882","kube-system/bootstrap-kube-controller-manager-master-0","openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56","openshift-etcd/installer-1-master-0"] Mar 13 12:49:48.663136 master-0 kubenswrapper[19715]: I0313 12:49:48.663092 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-7vm6x" Mar 13 12:49:48.671159 master-0 kubenswrapper[19715]: I0313 12:49:48.671107 19715 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="2b8c45bc-5067-46b9-81e7-094fb4025979" Mar 13 12:49:48.675516 master-0 kubenswrapper[19715]: I0313 12:49:48.675460 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 12:49:48.675916 master-0 kubenswrapper[19715]: I0313 12:49:48.675879 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 12:49:48.676511 master-0 kubenswrapper[19715]: I0313 12:49:48.676436 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 13 12:49:48.676714 master-0 kubenswrapper[19715]: I0313 12:49:48.676655 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 12:49:48.676949 master-0 kubenswrapper[19715]: I0313 12:49:48.676888 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 12:49:48.677047 master-0 kubenswrapper[19715]: I0313 12:49:48.677028 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 12:49:48.677258 master-0 kubenswrapper[19715]: I0313 12:49:48.677168 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 13 12:49:48.677663 master-0 kubenswrapper[19715]: I0313 12:49:48.677495 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 12:49:48.678804 master-0 kubenswrapper[19715]: I0313 12:49:48.678758 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 12:49:48.697308 master-0 kubenswrapper[19715]: I0313 12:49:48.697267 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 12:49:48.697619 master-0 kubenswrapper[19715]: I0313 12:49:48.697556 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 12:49:48.697701 master-0 kubenswrapper[19715]: I0313 12:49:48.697606 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 12:49:48.697748 master-0 kubenswrapper[19715]: I0313 12:49:48.697620 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 12:49:48.697884 master-0 kubenswrapper[19715]: I0313 12:49:48.697281 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 12:49:48.698081 master-0 kubenswrapper[19715]: I0313 12:49:48.698057 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 13 12:49:48.698149 master-0 kubenswrapper[19715]: I0313 12:49:48.698121 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:49:48.698218 master-0 kubenswrapper[19715]: I0313 12:49:48.698187 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" 
Mar 13 12:49:48.698218 master-0 kubenswrapper[19715]: I0313 12:49:48.698217 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 12:49:48.698308 master-0 kubenswrapper[19715]: I0313 12:49:48.698232 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 12:49:48.698308 master-0 kubenswrapper[19715]: I0313 12:49:48.698127 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-78fwj" Mar 13 12:49:48.698308 master-0 kubenswrapper[19715]: I0313 12:49:48.698284 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 12:49:48.698562 master-0 kubenswrapper[19715]: I0313 12:49:48.698466 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 13 12:49:48.698562 master-0 kubenswrapper[19715]: I0313 12:49:48.698476 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 12:49:48.698661 master-0 kubenswrapper[19715]: I0313 12:49:48.698562 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 12:49:48.698898 master-0 kubenswrapper[19715]: I0313 12:49:48.698869 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:49:48.698973 master-0 kubenswrapper[19715]: I0313 12:49:48.698930 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 13 12:49:48.699474 master-0 kubenswrapper[19715]: I0313 12:49:48.699422 19715 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 12:49:48.699692 master-0 kubenswrapper[19715]: I0313 12:49:48.699648 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 12:49:48.699754 master-0 kubenswrapper[19715]: I0313 12:49:48.699726 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 12:49:48.699948 master-0 kubenswrapper[19715]: I0313 12:49:48.699923 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 12:49:48.700004 master-0 kubenswrapper[19715]: I0313 12:49:48.699980 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 12:49:48.700120 master-0 kubenswrapper[19715]: I0313 12:49:48.699930 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 12:49:48.700513 master-0 kubenswrapper[19715]: I0313 12:49:48.700484 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 12:49:48.700614 master-0 kubenswrapper[19715]: I0313 12:49:48.700522 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 12:49:48.700682 master-0 kubenswrapper[19715]: I0313 12:49:48.700655 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 12:49:48.700785 master-0 kubenswrapper[19715]: I0313 12:49:48.700744 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 12:49:48.700891 master-0 kubenswrapper[19715]: I0313 12:49:48.700845 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 12:49:48.700961 master-0 
kubenswrapper[19715]: I0313 12:49:48.700893 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 12:49:48.701027 master-0 kubenswrapper[19715]: I0313 12:49:48.700993 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 13 12:49:48.701027 master-0 kubenswrapper[19715]: I0313 12:49:48.701016 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 13 12:49:48.701135 master-0 kubenswrapper[19715]: I0313 12:49:48.701107 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 13 12:49:48.701188 master-0 kubenswrapper[19715]: I0313 12:49:48.701141 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 13 12:49:48.701266 master-0 kubenswrapper[19715]: I0313 12:49:48.701232 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 12:49:48.701315 master-0 kubenswrapper[19715]: I0313 12:49:48.701281 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 12:49:48.701362 master-0 kubenswrapper[19715]: I0313 12:49:48.701321 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 13 12:49:48.701440 master-0 kubenswrapper[19715]: I0313 12:49:48.701405 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 13 12:49:48.701624 master-0 kubenswrapper[19715]: I0313 12:49:48.701594 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 12:49:48.701841 master-0 kubenswrapper[19715]: I0313 12:49:48.701806 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 13 12:49:48.701907 master-0 kubenswrapper[19715]: I0313 12:49:48.701836 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 12:49:48.701907 master-0 kubenswrapper[19715]: I0313 12:49:48.701870 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.701989 master-0 kubenswrapper[19715]: I0313 12:49:48.701912 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.702058 master-0 kubenswrapper[19715]: I0313 12:49:48.702024 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 13 12:49:48.703053 master-0 kubenswrapper[19715]: I0313 12:49:48.703015 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 13 12:49:48.704538 master-0 kubenswrapper[19715]: I0313 12:49:48.704495 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 13 12:49:48.704701 master-0 kubenswrapper[19715]: I0313 12:49:48.704667 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 12:49:48.704936 master-0 kubenswrapper[19715]: I0313 12:49:48.704906 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 13 12:49:48.705039 master-0 kubenswrapper[19715]: I0313 12:49:48.705008 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 12:49:48.705119 master-0 kubenswrapper[19715]: I0313 12:49:48.704910 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.705172 master-0 kubenswrapper[19715]: I0313 12:49:48.705150 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 13 12:49:48.705172 master-0 kubenswrapper[19715]: I0313 12:49:48.705167 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 12:49:48.705282 master-0 kubenswrapper[19715]: I0313 12:49:48.705263 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 13 12:49:48.705282 master-0 kubenswrapper[19715]: I0313 12:49:48.705279 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 13 12:49:48.705372 master-0 kubenswrapper[19715]: I0313 12:49:48.705351 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 12:49:48.705432 master-0 kubenswrapper[19715]: I0313 12:49:48.705424 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.705497 master-0 kubenswrapper[19715]: I0313 12:49:48.705459 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 13 12:49:48.705535 master-0 kubenswrapper[19715]: I0313 12:49:48.705506 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt"
Mar 13 12:49:48.705645 master-0 kubenswrapper[19715]: I0313 12:49:48.705620 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.705696 master-0 kubenswrapper[19715]: I0313 12:49:48.705674 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 12:49:48.705796 master-0 kubenswrapper[19715]: I0313 12:49:48.705776 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 12:49:48.705861 master-0 kubenswrapper[19715]: I0313 12:49:48.705796 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 12:49:48.705939 master-0 kubenswrapper[19715]: I0313 12:49:48.705916 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 13 12:49:48.706036 master-0 kubenswrapper[19715]: I0313 12:49:48.706014 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 12:49:48.706152 master-0 kubenswrapper[19715]: I0313 12:49:48.706131 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 12:49:48.706201 master-0 kubenswrapper[19715]: I0313 12:49:48.706142 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.706273 master-0 kubenswrapper[19715]: I0313 12:49:48.706249 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.706384 master-0 kubenswrapper[19715]: I0313 12:49:48.706360 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 13 12:49:48.706491 master-0 kubenswrapper[19715]: I0313 12:49:48.706405 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.706491 master-0 kubenswrapper[19715]: I0313 12:49:48.706428 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.706673 master-0 kubenswrapper[19715]: I0313 12:49:48.706653 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 12:49:48.706765 master-0 kubenswrapper[19715]: I0313 12:49:48.706736 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 13 12:49:48.706888 master-0 kubenswrapper[19715]: I0313 12:49:48.706868 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 13 12:49:48.706999 master-0 kubenswrapper[19715]: I0313 12:49:48.706982 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 13 12:49:48.707089 master-0 kubenswrapper[19715]: I0313 12:49:48.707072 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt"
Mar 13 12:49:48.707184 master-0 kubenswrapper[19715]: I0313 12:49:48.707167 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 12:49:48.707262 master-0 kubenswrapper[19715]: I0313 12:49:48.707245 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 12:49:48.707319 master-0 kubenswrapper[19715]: I0313 12:49:48.707287 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 13 12:49:48.707521 master-0 kubenswrapper[19715]: I0313 12:49:48.707497 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 12:49:48.707694 master-0 kubenswrapper[19715]: I0313 12:49:48.707674 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.707798 master-0 kubenswrapper[19715]: I0313 12:49:48.707778 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.708006 master-0 kubenswrapper[19715]: I0313 12:49:48.707975 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 12:49:48.708054 master-0 kubenswrapper[19715]: I0313 12:49:48.706655 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 12:49:48.708109 master-0 kubenswrapper[19715]: I0313 12:49:48.708044 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.708310 master-0 kubenswrapper[19715]: I0313 12:49:48.708291 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 12:49:48.708911 master-0 kubenswrapper[19715]: I0313 12:49:48.708738 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 13 12:49:48.709625 master-0 kubenswrapper[19715]: I0313 12:49:48.709509 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-l78bb"
Mar 13 12:49:48.709625 master-0 kubenswrapper[19715]: I0313 12:49:48.709562 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 13 12:49:48.710220 master-0 kubenswrapper[19715]: I0313 12:49:48.710181 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 13 12:49:48.712663 master-0 kubenswrapper[19715]: I0313 12:49:48.712481 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 13 12:49:48.719130 master-0 kubenswrapper[19715]: I0313 12:49:48.719066 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 13 12:49:48.726767 master-0 kubenswrapper[19715]: I0313 12:49:48.726549 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 13 12:49:48.727553 master-0 kubenswrapper[19715]: I0313 12:49:48.727499 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 13 12:49:48.730539 master-0 kubenswrapper[19715]: I0313 12:49:48.730496 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 13 12:49:48.730720 master-0 kubenswrapper[19715]: I0313 12:49:48.730613 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 13 12:49:48.734384 master-0 kubenswrapper[19715]: I0313 12:49:48.731607 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 12:49:48.736289 master-0 kubenswrapper[19715]: I0313 12:49:48.736236 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-jwq7f"
Mar 13 12:49:48.736668 master-0 kubenswrapper[19715]: I0313 12:49:48.736620 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 13 12:49:48.745819 master-0 kubenswrapper[19715]: I0313 12:49:48.745753 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 13 12:49:48.747762 master-0 kubenswrapper[19715]: I0313 12:49:48.747710 19715 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 13 12:49:48.766953 master-0 kubenswrapper[19715]: I0313 12:49:48.766900 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 12:49:48.788667 master-0 kubenswrapper[19715]: I0313 12:49:48.788225 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 13 12:49:48.820048 master-0 kubenswrapper[19715]: I0313 12:49:48.819986 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 13 12:49:48.825828 master-0 kubenswrapper[19715]: I0313 12:49:48.825740 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 13 12:49:48.845696 master-0 kubenswrapper[19715]: I0313 12:49:48.845216 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Mar 13 12:49:48.866484 master-0 kubenswrapper[19715]: I0313 12:49:48.866432 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 13 12:49:48.885238 master-0 kubenswrapper[19715]: I0313 12:49:48.885067 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 13 12:49:48.905806 master-0 kubenswrapper[19715]: I0313 12:49:48.905721 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 13 12:49:48.936421 master-0 kubenswrapper[19715]: I0313 12:49:48.934975 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9n5pq"
Mar 13 12:49:48.948855 master-0 kubenswrapper[19715]: I0313 12:49:48.946694 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 13 12:49:48.965950 master-0 kubenswrapper[19715]: I0313 12:49:48.965878 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 13 12:49:48.986662 master-0 kubenswrapper[19715]: I0313 12:49:48.986460 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 13 12:49:49.029608 master-0 kubenswrapper[19715]: I0313 12:49:49.020627 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 12:49:49.047606 master-0 kubenswrapper[19715]: I0313 12:49:49.045016 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 13 12:49:49.062542 master-0 kubenswrapper[19715]: I0313 12:49:49.058709 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:49:49.066902 master-0 kubenswrapper[19715]: I0313 12:49:49.066848 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 13 12:49:49.069244 master-0 kubenswrapper[19715]: I0313 12:49:49.069174 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 13 12:49:49.085868 master-0 kubenswrapper[19715]: I0313 12:49:49.085817 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Mar 13 12:49:49.107079 master-0 kubenswrapper[19715]: I0313 12:49:49.105756 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 13 12:49:49.113707 master-0 kubenswrapper[19715]: I0313 12:49:49.113662 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_80ceb0f9-67e4-4275-8532-85b6602367a2/installer/0.log"
Mar 13 12:49:49.113937 master-0 kubenswrapper[19715]: I0313 12:49:49.113766 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 12:49:49.125749 master-0 kubenswrapper[19715]: I0313 12:49:49.125702 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 13 12:49:49.146292 master-0 kubenswrapper[19715]: I0313 12:49:49.146153 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 13 12:49:49.165638 master-0 kubenswrapper[19715]: I0313 12:49:49.165566 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7gls2"
Mar 13 12:49:49.185100 master-0 kubenswrapper[19715]: I0313 12:49:49.185048 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 12:49:49.205758 master-0 kubenswrapper[19715]: I0313 12:49:49.205694 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 13 12:49:49.225110 master-0 kubenswrapper[19715]: I0313 12:49:49.224975 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 13 12:49:49.245370 master-0 kubenswrapper[19715]: I0313 12:49:49.245302 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Mar 13 12:49:49.265100 master-0 kubenswrapper[19715]: I0313 12:49:49.265049 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 13 12:49:49.288043 master-0 kubenswrapper[19715]: I0313 12:49:49.287975 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 13 12:49:49.305744 master-0 kubenswrapper[19715]: I0313 12:49:49.305669 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Mar 13 12:49:49.326161 master-0 kubenswrapper[19715]: I0313 12:49:49.326106 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 12:49:49.345672 master-0 kubenswrapper[19715]: I0313 12:49:49.345611 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 13 12:49:49.366051 master-0 kubenswrapper[19715]: I0313 12:49:49.365990 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 13 12:49:49.386916 master-0 kubenswrapper[19715]: I0313 12:49:49.386859 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 13 12:49:49.407151 master-0 kubenswrapper[19715]: I0313 12:49:49.407045 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 13 12:49:49.425596 master-0 kubenswrapper[19715]: I0313 12:49:49.425530 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 12:49:49.445680 master-0 kubenswrapper[19715]: I0313 12:49:49.445611 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 12:49:49.621796 master-0 kubenswrapper[19715]: I0313 12:49:49.621592 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt"
Mar 13 12:49:49.622484 master-0 kubenswrapper[19715]: I0313 12:49:49.621897 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-fbzjs"
Mar 13 12:49:49.623296 master-0 kubenswrapper[19715]: I0313 12:49:49.623260 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-2tphk"
Mar 13 12:49:49.624249 master-0 kubenswrapper[19715]: I0313 12:49:49.623476 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4f2vw"
Mar 13 12:49:49.624249 master-0 kubenswrapper[19715]: I0313 12:49:49.623654 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 12:49:49.624249 master-0 kubenswrapper[19715]: I0313 12:49:49.623817 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls"
Mar 13 12:49:49.624249 master-0 kubenswrapper[19715]: I0313 12:49:49.623931 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 13 12:49:49.624249 master-0 kubenswrapper[19715]: I0313 12:49:49.624031 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 13 12:49:49.628814 master-0 kubenswrapper[19715]: I0313 12:49:49.627348 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 12:49:49.634361 master-0 kubenswrapper[19715]: I0313 12:49:49.634305 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_80ceb0f9-67e4-4275-8532-85b6602367a2/installer/0.log"
Mar 13 12:49:49.634737 master-0 kubenswrapper[19715]: I0313 12:49:49.634394 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"80ceb0f9-67e4-4275-8532-85b6602367a2","Type":"ContainerDied","Data":"dfda9ac962c72952dd338c0552968ea41c65cec9deb2da109d44fd46401c07be"}
Mar 13 12:49:49.634737 master-0 kubenswrapper[19715]: I0313 12:49:49.634441 19715 scope.go:117] "RemoveContainer" containerID="c83ff937194332d291b1b5b800ca7831144c85fa708fce3eae5e12903a82439b"
Mar 13 12:49:49.634737 master-0 kubenswrapper[19715]: I0313 12:49:49.634545 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 12:49:49.644607 master-0 kubenswrapper[19715]: I0313 12:49:49.642072 19715 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="20738ab02637717910251883b8d669f0a85804f124bfcd78ee15eab7a5a827e7" exitCode=1
Mar 13 12:49:49.644607 master-0 kubenswrapper[19715]: I0313 12:49:49.642160 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"20738ab02637717910251883b8d669f0a85804f124bfcd78ee15eab7a5a827e7"}
Mar 13 12:49:49.644607 master-0 kubenswrapper[19715]: I0313 12:49:49.642629 19715 scope.go:117] "RemoveContainer" containerID="20738ab02637717910251883b8d669f0a85804f124bfcd78ee15eab7a5a827e7"
Mar 13 12:49:49.652885 master-0 kubenswrapper[19715]: I0313 12:49:49.649993 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 12:49:49.652885 master-0 kubenswrapper[19715]: I0313 12:49:49.650775 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_cdcecc61ff5eeb08bd2a3ac12599e4f9/kube-apiserver-check-endpoints/0.log"
Mar 13 12:49:49.652885 master-0 kubenswrapper[19715]: I0313 12:49:49.651911 19715 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d" exitCode=255
Mar 13 12:49:49.652885 master-0 kubenswrapper[19715]: I0313 12:49:49.652092 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerDied","Data":"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d"}
Mar 13 12:49:49.652885 master-0 kubenswrapper[19715]: I0313 12:49:49.652440 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72"
Mar 13 12:49:49.652885 master-0 kubenswrapper[19715]: E0313 12:49:49.652687 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045"
Mar 13 12:49:49.667045 master-0 kubenswrapper[19715]: I0313 12:49:49.666893 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 13 12:49:49.668128 master-0 kubenswrapper[19715]: I0313 12:49:49.668097 19715 scope.go:117] "RemoveContainer" containerID="47aafa637897db874d2f314c91d98220473a29ec5c1860c9183088400b424069"
Mar 13 12:49:49.684834 master-0 kubenswrapper[19715]: I0313 12:49:49.684762 19715 request.go:700] Waited for 1.003857983s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-storage-version-migrator/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0
Mar 13 12:49:49.686787 master-0 kubenswrapper[19715]: I0313 12:49:49.686729 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 13 12:49:49.703949 master-0 kubenswrapper[19715]: I0313 12:49:49.703871 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 13 12:49:49.707338 master-0 kubenswrapper[19715]: I0313 12:49:49.707298 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 13 12:49:49.728607 master-0 kubenswrapper[19715]: I0313 12:49:49.728529 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 13 12:49:49.751051 master-0 kubenswrapper[19715]: I0313 12:49:49.750990 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 13 12:49:49.768631 master-0 kubenswrapper[19715]: I0313 12:49:49.766568 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 13 12:49:49.788821 master-0 kubenswrapper[19715]: I0313 12:49:49.788751 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-kcbnp"
Mar 13 12:49:49.811291 master-0 kubenswrapper[19715]: I0313 12:49:49.809989 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 13 12:49:49.832164 master-0 kubenswrapper[19715]: I0313 12:49:49.832077 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xg9t5"
Mar 13 12:49:49.848629 master-0 kubenswrapper[19715]: I0313 12:49:49.845638 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 13 12:49:49.881014 master-0 kubenswrapper[19715]: I0313 12:49:49.880956 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 13 12:49:49.889418 master-0 kubenswrapper[19715]: I0313 12:49:49.889250 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 13 12:49:49.908649 master-0 kubenswrapper[19715]: I0313 12:49:49.906549 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 13 12:49:49.928661 master-0 kubenswrapper[19715]: I0313 12:49:49.928451 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gft2f"
Mar 13 12:49:49.946170 master-0 kubenswrapper[19715]: I0313 12:49:49.946121 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 13 12:49:49.966709 master-0 kubenswrapper[19715]: I0313 12:49:49.966395 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 13 12:49:49.976036 master-0 kubenswrapper[19715]: I0313 12:49:49.975990 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:49:49.986197 master-0 kubenswrapper[19715]: I0313 12:49:49.986144 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 13 12:49:50.005330 master-0 kubenswrapper[19715]: I0313 12:49:50.005266 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 13 12:49:50.025082 master-0 kubenswrapper[19715]: I0313 12:49:50.025024 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-cjs56"
Mar 13 12:49:50.045548 master-0 kubenswrapper[19715]: I0313 12:49:50.045494 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 13 12:49:50.065696 master-0 kubenswrapper[19715]: I0313 12:49:50.065623 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 13 12:49:50.086042 master-0 kubenswrapper[19715]: I0313 12:49:50.085988 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 13 12:49:50.105893 master-0 kubenswrapper[19715]: I0313 12:49:50.105832 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-5lcmq"
Mar 13 12:49:50.125765 master-0 kubenswrapper[19715]: I0313 12:49:50.125713 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 13 12:49:50.146670 master-0 kubenswrapper[19715]: I0313 12:49:50.146608 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-46jst"
Mar 13 12:49:50.190037 master-0 kubenswrapper[19715]: I0313 12:49:50.166177 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 13 12:49:50.190037 master-0 kubenswrapper[19715]: I0313 12:49:50.186074 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 13 12:49:50.205949 master-0 kubenswrapper[19715]: I0313 12:49:50.205870 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 13 12:49:50.334150 master-0 kubenswrapper[19715]: I0313 12:49:50.334101 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 13 12:49:50.334429 master-0 kubenswrapper[19715]: I0313 12:49:50.334386 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 13 12:49:50.337048 master-0 kubenswrapper[19715]: I0313 12:49:50.337004 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 13 12:49:50.337360 master-0 kubenswrapper[19715]: I0313 12:49:50.337328 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 13 12:49:50.337436 master-0 kubenswrapper[19715]: I0313 12:49:50.337416 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-7fzhf"
Mar 13 12:49:50.337878 master-0 kubenswrapper[19715]: I0313 12:49:50.337844 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 13 12:49:50.346315 master-0 kubenswrapper[19715]: I0313 12:49:50.346256 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 13 12:49:50.366221 master-0 kubenswrapper[19715]: I0313 12:49:50.366114 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 13 12:49:50.385803 master-0 kubenswrapper[19715]: I0313 12:49:50.385733 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 13 12:49:50.405844 master-0 kubenswrapper[19715]: I0313 12:49:50.405778 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 13 12:49:50.431556 master-0 kubenswrapper[19715]: I0313 12:49:50.431504 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 13 12:49:50.446534 master-0 kubenswrapper[19715]: I0313 12:49:50.446393 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 13 12:49:50.466011 master-0 kubenswrapper[19715]: I0313 12:49:50.465943 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7gz29"
Mar 13 12:49:50.486088 master-0 kubenswrapper[19715]: I0313 12:49:50.486023 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 13 12:49:50.505599 master-0 kubenswrapper[19715]: I0313 12:49:50.505521 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 12:49:50.526540 master-0 kubenswrapper[19715]: I0313 12:49:50.526445 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 13 12:49:50.545804 master-0 kubenswrapper[19715]: I0313 12:49:50.545746 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 12:49:50.573243 master-0 kubenswrapper[19715]: I0313 12:49:50.573173 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 13 12:49:50.585852 master-0 kubenswrapper[19715]: I0313 12:49:50.585807 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 12:49:50.606164 master-0 kubenswrapper[19715]: I0313 12:49:50.606105 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 12:49:50.625524 master-0 kubenswrapper[19715]: I0313 12:49:50.625456 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 12:49:50.645250 master-0 kubenswrapper[19715]: I0313 12:49:50.645191 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-6slw7"
Mar 13 12:49:50.659544 master-0 kubenswrapper[19715]: I0313 12:49:50.659475 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:49:50.665774 master-0 kubenswrapper[19715]: I0313 12:49:50.665710 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt"
Mar 13 12:49:50.686117 master-0 kubenswrapper[19715]: I0313 12:49:50.686029 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 13 12:49:50.703704 master-0 kubenswrapper[19715]: I0313 12:49:50.703498 19715 request.go:700] Waited for 1.993408879s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-dns-operator/pods/dns-operator-589895fbb7-w7mv2
Mar 13 12:49:50.746053 master-0 kubenswrapper[19715]: I0313 12:49:50.745985 19715 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 13 12:49:50.792405 master-0 kubenswrapper[19715]: I0313 12:49:50.792001 19715 reconstruct.go:205] "DevicePaths of reconstructed volumes updated"
Mar 13 12:49:50.834365 master-0 kubenswrapper[19715]: I0313 12:49:50.834141 19715 kubelet_pods.go:1320] "Clean up containers for orphaned pod we had not seen before" podUID="5f77c8e18b751d90bc0dfe2d4e304050" killPodOptions=""
Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: E0313 12:49:50.836901 19715 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.141s"
Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.836956 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.837005 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13
12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.837018 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-retry-1-master-0" event={"ID":"bc244427-5e4e-441c-a04d-f93aeca9b7c1","Type":"ContainerDied","Data":"c2463d59212cd944dab4ea9d30f2cc50f1b57872c877b533a967a0558f9e8739"} Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.837051 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2463d59212cd944dab4ea9d30f2cc50f1b57872c877b533a967a0558f9e8739" Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.837071 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.837097 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"9eb4b2e62b81effa2b30fc9741ea362aa4ef66b19b64c96e124eb88cbf1ef364"} Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.837115 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.837128 19715 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="2b8c45bc-5067-46b9-81e7-094fb4025979" Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.837145 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.837156 19715 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
mirrorPodUID="2b8c45bc-5067-46b9-81e7-094fb4025979" Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: I0313 12:49:50.838146 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72" Mar 13 12:49:50.840722 master-0 kubenswrapper[19715]: E0313 12:49:50.838454 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-controller-manager\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-controller-manager pod=bootstrap-kube-controller-manager-master-0_kube-system(f78c05e1499b533b83f091333d61f045)\"" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" Mar 13 12:49:50.866625 master-0 kubenswrapper[19715]: I0313 12:49:50.866569 19715 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 13 12:49:50.866757 master-0 kubenswrapper[19715]: I0313 12:49:50.866738 19715 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 13 12:49:50.871231 master-0 kubenswrapper[19715]: I0313 12:49:50.871204 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeNotReady" Mar 13 12:49:50.871402 master-0 kubenswrapper[19715]: I0313 12:49:50.871337 19715 setters.go:603] "Node became not ready" node="master-0" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-03-13T12:49:50Z","lastTransitionTime":"2026-03-13T12:49:50Z","reason":"KubeletNotReady","message":"CSINode is not yet initialized"} Mar 13 12:49:50.886370 master-0 kubenswrapper[19715]: I0313 12:49:50.886326 19715 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady" Mar 13 12:49:50.886669 master-0 kubenswrapper[19715]: I0313 12:49:50.886616 19715 kubelet_node_status.go:538] "Fast updating node status as it just became ready" Mar 13 12:49:50.892986 master-0 kubenswrapper[19715]: I0313 12:49:50.892955 19715 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kube-api-access\") pod \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " Mar 13 12:49:50.893199 master-0 kubenswrapper[19715]: I0313 12:49:50.893180 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80ceb0f9-67e4-4275-8532-85b6602367a2-kube-api-access\") pod \"80ceb0f9-67e4-4275-8532-85b6602367a2\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " Mar 13 12:49:50.893426 master-0 kubenswrapper[19715]: I0313 12:49:50.893406 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x27d2\" (UniqueName: \"kubernetes.io/projected/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-kube-api-access-x27d2\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:49:50.893541 master-0 kubenswrapper[19715]: I0313 12:49:50.893518 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lm4d2\" (UniqueName: \"kubernetes.io/projected/31442e1e-3f42-4dba-82d5-08e5f8d29a58-kube-api-access-lm4d2\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577" Mar 13 12:49:50.893682 master-0 kubenswrapper[19715]: I0313 12:49:50.893661 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d868028-9984-472a-8403-ffed767e1bf8-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:49:50.893845 master-0 kubenswrapper[19715]: I0313 12:49:50.893824 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-conf-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:50.894007 master-0 kubenswrapper[19715]: I0313 12:49:50.893986 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73dc5747-2d30-4a2d-a784-1dea1e10811d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:49:50.894170 master-0 kubenswrapper[19715]: I0313 12:49:50.894146 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rspzx\" (UniqueName: \"kubernetes.io/projected/603fef71-e0cd-4617-bd8a-a55580578c2f-kube-api-access-rspzx\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:49:50.894353 master-0 kubenswrapper[19715]: I0313 12:49:50.894272 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-catalog-content\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:49:50.894491 master-0 kubenswrapper[19715]: I0313 12:49:50.894465 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1929440f-f2cc-450d-80ff-ded6788baa74-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:49:50.894664 master-0 kubenswrapper[19715]: I0313 12:49:50.894633 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n8sb\" (UniqueName: \"kubernetes.io/projected/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa-kube-api-access-9n8sb\") pod \"csi-snapshot-controller-operator-5685fbc7d-77b2h\" (UID: \"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" Mar 13 12:49:50.894802 master-0 kubenswrapper[19715]: I0313 12:49:50.894783 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b5ab386-14ed-4610-a08a-54b6de877603-host-slash\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:49:50.894925 master-0 kubenswrapper[19715]: I0313 12:49:50.894907 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jknp\" (UniqueName: \"kubernetes.io/projected/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-kube-api-access-5jknp\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" Mar 13 12:49:50.895049 master-0 kubenswrapper[19715]: I0313 12:49:50.895029 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-serving-cert\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:49:50.895157 master-0 kubenswrapper[19715]: I0313 12:49:50.895139 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9tpt\" (UniqueName: \"kubernetes.io/projected/5623ea13-a34b-4510-8902-341912d115df-kube-api-access-q9tpt\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:49:50.895278 master-0 kubenswrapper[19715]: I0313 12:49:50.895259 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-config\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:49:50.895396 master-0 kubenswrapper[19715]: I0313 12:49:50.895372 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c7efc1-6d89-4831-89d6-6f2812c36c36-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:49:50.895510 master-0 kubenswrapper[19715]: I0313 12:49:50.895491 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-env-overrides\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:49:50.895692 master-0 kubenswrapper[19715]: I0313 12:49:50.895266 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-serving-cert\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:49:50.895756 master-0 kubenswrapper[19715]: I0313 12:49:50.894308 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/73dc5747-2d30-4a2d-a784-1dea1e10811d-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:49:50.895806 master-0 kubenswrapper[19715]: I0313 12:49:50.894416 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-catalog-content\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:49:50.895806 master-0 kubenswrapper[19715]: I0313 12:49:50.895609 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-config\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:49:50.895893 master-0 kubenswrapper[19715]: I0313 12:49:50.894732 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1929440f-f2cc-450d-80ff-ded6788baa74-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:49:50.895893 master-0 kubenswrapper[19715]: I0313 
12:49:50.894005 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0d868028-9984-472a-8403-ffed767e1bf8-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:49:50.895893 master-0 kubenswrapper[19715]: I0313 12:49:50.895655 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtpqk\" (UniqueName: \"kubernetes.io/projected/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-kube-api-access-qtpqk\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:49:50.896172 master-0 kubenswrapper[19715]: I0313 12:49:50.895913 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1213b50-28bf-43ff-94c4-20616907735b-trusted-ca\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:49:50.896172 master-0 kubenswrapper[19715]: I0313 12:49:50.895927 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/54c7efc1-6d89-4831-89d6-6f2812c36c36-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:49:50.896172 master-0 kubenswrapper[19715]: I0313 12:49:50.895946 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a74c2a-8376-4998-bdc6-02a978f1f568-serving-cert\") pod 
\"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:49:50.896172 master-0 kubenswrapper[19715]: I0313 12:49:50.895975 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:49:50.896172 master-0 kubenswrapper[19715]: I0313 12:49:50.895997 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:49:50.896172 master-0 kubenswrapper[19715]: I0313 12:49:50.896018 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-config-volume\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:49:50.896172 master-0 kubenswrapper[19715]: I0313 12:49:50.896040 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc1c9136-80e1-4736-8959-cc1e58aee26e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56" Mar 13 12:49:50.896172 master-0 kubenswrapper[19715]: I0313 
12:49:50.896060 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603fef71-e0cd-4617-bd8a-a55580578c2f-serving-cert\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:49:50.896172 master-0 kubenswrapper[19715]: I0313 12:49:50.896084 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1e9803a4-a166-42dc-9498-57e213602684-signing-key\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" Mar 13 12:49:50.896570 master-0 kubenswrapper[19715]: I0313 12:49:50.896354 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/603fef71-e0cd-4617-bd8a-a55580578c2f-serving-cert\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:49:50.896570 master-0 kubenswrapper[19715]: I0313 12:49:50.896373 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:49:50.896570 master-0 kubenswrapper[19715]: I0313 12:49:50.896399 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cscxl\" (UniqueName: \"kubernetes.io/projected/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-kube-api-access-cscxl\") pod \"cluster-samples-operator-664cb58b85-78swz\" (UID: 
\"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz" Mar 13 12:49:50.896570 master-0 kubenswrapper[19715]: I0313 12:49:50.896435 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-bin\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.896570 master-0 kubenswrapper[19715]: I0313 12:49:50.896430 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2a74c2a-8376-4998-bdc6-02a978f1f568-serving-cert\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:49:50.896570 master-0 kubenswrapper[19715]: I0313 12:49:50.896461 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56" Mar 13 12:49:50.896570 master-0 kubenswrapper[19715]: I0313 12:49:50.896527 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/20217cff-2f81-4a56-9c15-28385c19258c-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:49:50.896972 master-0 kubenswrapper[19715]: I0313 
12:49:50.896561 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h5ht\" (UniqueName: \"kubernetes.io/projected/71b741d4-3899-4d31-afd1-72f5a9321f75-kube-api-access-2h5ht\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:49:50.896972 master-0 kubenswrapper[19715]: I0313 12:49:50.896672 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c1213b50-28bf-43ff-94c4-20616907735b-trusted-ca\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:49:50.896972 master-0 kubenswrapper[19715]: I0313 12:49:50.896748 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-var-lock\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:50.896972 master-0 kubenswrapper[19715]: I0313 12:49:50.896852 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-config-volume\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:49:50.896972 master-0 kubenswrapper[19715]: I0313 12:49:50.896750 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/1e9803a4-a166-42dc-9498-57e213602684-signing-key\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" Mar 13 
12:49:50.896972 master-0 kubenswrapper[19715]: I0313 12:49:50.896875 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s7cgb\" (UniqueName: \"kubernetes.io/projected/6592aa5b-4a50-40f6-80a5-87e3c547018d-kube-api-access-s7cgb\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" Mar 13 12:49:50.896972 master-0 kubenswrapper[19715]: I0313 12:49:50.896914 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-audit-dir\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:50.896972 master-0 kubenswrapper[19715]: I0313 12:49:50.896974 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-env-overrides\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897063 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897103 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-ovnkube-identity-cm\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897131 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684c9067-189a-4f50-ac8d-97111aa73d9c-config\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897157 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-socket-dir-parent\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897186 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-images\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897223 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897251 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-serving-ca\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897269 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897293 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d868028-9984-472a-8403-ffed767e1bf8-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897312 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897333 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/684c9067-189a-4f50-ac8d-97111aa73d9c-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897351 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 12:49:50.897371 master-0 kubenswrapper[19715]: I0313 12:49:50.897370 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf"
Mar 13 12:49:50.897822 master-0 kubenswrapper[19715]: I0313 12:49:50.897388 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8d83309-58b2-40af-ab48-1f8b9aeffefb-proxy-tls\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:49:50.897822 master-0 kubenswrapper[19715]: I0313 12:49:50.897406 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-whereabouts-configmap\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:49:50.897822 master-0 kubenswrapper[19715]: I0313 12:49:50.897425 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wwjh6\" (UniqueName: \"kubernetes.io/projected/90c6474d-44a1-4164-a85b-6de0525dc656-kube-api-access-wwjh6\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"
Mar 13 12:49:50.897822 master-0 kubenswrapper[19715]: I0313 12:49:50.897443 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-config\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:49:50.897822 master-0 kubenswrapper[19715]: I0313 12:49:50.897459 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1929440f-f2cc-450d-80ff-ded6788baa74-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"
Mar 13 12:49:50.897822 master-0 kubenswrapper[19715]: I0313 12:49:50.897479 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55v4q\" (UniqueName: \"kubernetes.io/projected/03758d96-5a20-4cba-92e0-47f5b1a3e558-kube-api-access-55v4q\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:49:50.897822 master-0 kubenswrapper[19715]: I0313 12:49:50.897496 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-k8s-cni-cncf-io\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.897822 master-0 kubenswrapper[19715]: I0313 12:49:50.897495 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:49:50.897822 master-0 kubenswrapper[19715]: I0313 12:49:50.897520 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-config\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"
Mar 13 12:49:50.897822 master-0 kubenswrapper[19715]: I0313 12:49:50.897562 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14eb83e7-c436-4f10-8cba-29e09a7036a8-images\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:49:50.898229 master-0 kubenswrapper[19715]: I0313 12:49:50.897950 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:49:50.898229 master-0 kubenswrapper[19715]: I0313 12:49:50.897967 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e8d83309-58b2-40af-ab48-1f8b9aeffefb-proxy-tls\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:49:50.898229 master-0 kubenswrapper[19715]: I0313 12:49:50.897958 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-config\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:49:50.898229 master-0 kubenswrapper[19715]: I0313 12:49:50.898187 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-host\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.898383 master-0 kubenswrapper[19715]: I0313 12:49:50.898234 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:49:50.898383 master-0 kubenswrapper[19715]: I0313 12:49:50.898244 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-config\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"
Mar 13 12:49:50.898383 master-0 kubenswrapper[19715]: I0313 12:49:50.898258 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nqxjz\" (UniqueName: \"kubernetes.io/projected/2b5ab386-14ed-4610-a08a-54b6de877603-kube-api-access-nqxjz\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5"
Mar 13 12:49:50.898383 master-0 kubenswrapper[19715]: I0313 12:49:50.897981 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/cf580693-2931-4fef-adb5-b396f7303352-ovnkube-identity-cm\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:49:50.898383 master-0 kubenswrapper[19715]: I0313 12:49:50.898280 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-sys\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.898383 master-0 kubenswrapper[19715]: I0313 12:49:50.898318 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/c1213b50-28bf-43ff-94c4-20616907735b-metrics-tls\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:49:50.898383 master-0 kubenswrapper[19715]: I0313 12:49:50.898333 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-serving-ca\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:50.898665 master-0 kubenswrapper[19715]: I0313 12:49:50.898537 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0d868028-9984-472a-8403-ffed767e1bf8-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt"
Mar 13 12:49:50.898665 master-0 kubenswrapper[19715]: I0313 12:49:50.898548 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64w7v\" (UniqueName: \"kubernetes.io/projected/58581675-62f2-4564-9e12-bf34551b96ac-kube-api-access-64w7v\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.898665 master-0 kubenswrapper[19715]: I0313 12:49:50.898606 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-images\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:49:50.898665 master-0 kubenswrapper[19715]: I0313 12:49:50.898627 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1929440f-f2cc-450d-80ff-ded6788baa74-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"
Mar 13 12:49:50.898665 master-0 kubenswrapper[19715]: I0313 12:49:50.898643 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:49:50.898806 master-0 kubenswrapper[19715]: I0313 12:49:50.898674 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-slash\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:49:50.899152 master-0 kubenswrapper[19715]: I0313 12:49:50.899112 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-whereabouts-configmap\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:49:50.899298 master-0 kubenswrapper[19715]: I0313 12:49:50.899241 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"
Mar 13 12:49:50.899364 master-0 kubenswrapper[19715]: I0313 12:49:50.899248 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:49:50.899426 master-0 kubenswrapper[19715]: I0313 12:49:50.899379 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vp6bn\" (UniqueName: \"kubernetes.io/projected/59c9773d-7e88-4e30-9b8a-792a869a860e-kube-api-access-vp6bn\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:49:50.899426 master-0 kubenswrapper[19715]: I0313 12:49:50.899409 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6592aa5b-4a50-40f6-80a5-87e3c547018d-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"
Mar 13 12:49:50.899494 master-0 kubenswrapper[19715]: I0313 12:49:50.899419 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-images\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:49:50.899494 master-0 kubenswrapper[19715]: I0313 12:49:50.899441 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qttkt\" (UniqueName: \"kubernetes.io/projected/54c7efc1-6d89-4831-89d6-6f2812c36c36-kube-api-access-qttkt\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b"
Mar 13 12:49:50.899555 master-0 kubenswrapper[19715]: I0313 12:49:50.899520 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1929440f-f2cc-450d-80ff-ded6788baa74-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj"
Mar 13 12:49:50.899695 master-0 kubenswrapper[19715]: I0313 12:49:50.899667 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603fef71-e0cd-4617-bd8a-a55580578c2f-config\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882"
Mar 13 12:49:50.899778 master-0 kubenswrapper[19715]: I0313 12:49:50.899747 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cf580693-2931-4fef-adb5-b396f7303352-webhook-cert\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7"
Mar 13 12:49:50.899812 master-0 kubenswrapper[19715]: I0313 12:49:50.899759 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6592aa5b-4a50-40f6-80a5-87e3c547018d-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2"
Mar 13 12:49:50.899812 master-0 kubenswrapper[19715]: I0313 12:49:50.899793 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:49:50.899959 master-0 kubenswrapper[19715]: I0313 12:49:50.899924 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/603fef71-e0cd-4617-bd8a-a55580578c2f-config\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882"
Mar 13 12:49:50.899959 master-0 kubenswrapper[19715]: I0313 12:49:50.899938 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eda319d8-825a-4881-96a9-5386b87f8a4f-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"
Mar 13 12:49:50.900023 master-0 kubenswrapper[19715]: I0313 12:49:50.899980 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684c9067-189a-4f50-ac8d-97111aa73d9c-serving-cert\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:49:50.900023 master-0 kubenswrapper[19715]: I0313 12:49:50.899987 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:49:50.900023 master-0 kubenswrapper[19715]: I0313 12:49:50.899999 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqm5h\" (UniqueName: \"kubernetes.io/projected/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-kube-api-access-pqm5h\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf"
Mar 13 12:49:50.900119 master-0 kubenswrapper[19715]: I0313 12:49:50.900018 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/71b741d4-3899-4d31-afd1-72f5a9321f75-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:49:50.900119 master-0 kubenswrapper[19715]: I0313 12:49:50.900043 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/eda319d8-825a-4881-96a9-5386b87f8a4f-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"
Mar 13 12:49:50.900248 master-0 kubenswrapper[19715]: I0313 12:49:50.900210 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ad4825-17fa-4ddd-b21e-334158f1c048-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"
Mar 13 12:49:50.900289 master-0 kubenswrapper[19715]: I0313 12:49:50.900267 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-system-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.900362 master-0 kubenswrapper[19715]: I0313 12:49:50.900335 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/684c9067-189a-4f50-ac8d-97111aa73d9c-serving-cert\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:49:50.900647 master-0 kubenswrapper[19715]: I0313 12:49:50.900381 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-hostroot\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.900647 master-0 kubenswrapper[19715]: I0313 12:49:50.900628 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-config\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:49:50.900769 master-0 kubenswrapper[19715]: I0313 12:49:50.900652 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/03758d96-5a20-4cba-92e0-47f5b1a3e558-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:49:50.900769 master-0 kubenswrapper[19715]: I0313 12:49:50.900676 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-78swz\" (UID: \"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz"
Mar 13 12:49:50.900769 master-0 kubenswrapper[19715]: I0313 12:49:50.900681 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b2ad4825-17fa-4ddd-b21e-334158f1c048-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"
Mar 13 12:49:50.900769 master-0 kubenswrapper[19715]: I0313 12:49:50.900701 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w97j5\" (UniqueName: \"kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-kube-api-access-w97j5\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"
Mar 13 12:49:50.900921 master-0 kubenswrapper[19715]: I0313 12:49:50.900791 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf"
Mar 13 12:49:50.900921 master-0 kubenswrapper[19715]: I0313 12:49:50.900830 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/684c9067-189a-4f50-ac8d-97111aa73d9c-config\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc"
Mar 13 12:49:50.900980 master-0 kubenswrapper[19715]: I0313 12:49:50.900926 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djchk\" (UniqueName: \"kubernetes.io/projected/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-kube-api-access-djchk\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd"
Mar 13 12:49:50.900980 master-0 kubenswrapper[19715]: I0313 12:49:50.900965 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-audit\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.900980 master-0 kubenswrapper[19715]: I0313 12:49:50.900972 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/03758d96-5a20-4cba-92e0-47f5b1a3e558-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:49:50.901068 master-0 kubenswrapper[19715]: I0313 12:49:50.900992 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:49:50.901068 master-0 kubenswrapper[19715]: I0313 12:49:50.901023 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-log-socket\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:49:50.901135 master-0 kubenswrapper[19715]: I0313 12:49:50.901028 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-config\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:49:50.901135 master-0 kubenswrapper[19715]: I0313 12:49:50.901103 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vt8r\" (UniqueName: \"kubernetes.io/projected/730e1f43-39b7-41de-ac81-270966725477-kube-api-access-2vt8r\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn"
Mar 13 12:49:50.901194 master-0 kubenswrapper[19715]: I0313 12:49:50.901165 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tnbf9\" (UniqueName: \"kubernetes.io/projected/b2ad4825-17fa-4ddd-b21e-334158f1c048-kube-api-access-tnbf9\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z"
Mar 13 12:49:50.901263 master-0 kubenswrapper[19715]: I0313 12:49:50.901239 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-audit\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.901311 master-0 kubenswrapper[19715]: I0313 12:49:50.901261 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-kubelet\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.901311 master-0 kubenswrapper[19715]: I0313 12:49:50.901296 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:49:50.901369 master-0 kubenswrapper[19715]: I0313 12:49:50.901326 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-system-cni-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:49:50.901406 master-0 kubenswrapper[19715]: I0313 12:49:50.901362 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-config\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:49:50.901437 master-0 kubenswrapper[19715]: I0313 12:49:50.901395 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-78swz\" (UID: \"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz"
Mar 13 12:49:50.901437 master-0 kubenswrapper[19715]: I0313 12:49:50.901408 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-os-release\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.901493 master-0 kubenswrapper[19715]: I0313 12:49:50.901459 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-multus-certs\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.901493 master-0 kubenswrapper[19715]: I0313 12:49:50.901486 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-conf\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.901595 master-0 kubenswrapper[19715]: I0313 12:49:50.901511 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-encryption-config\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:50.901595 master-0 kubenswrapper[19715]: I0313 12:49:50.901535 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1e9803a4-a166-42dc-9498-57e213602684-signing-cabundle\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c"
Mar 13 12:49:50.901595 master-0 kubenswrapper[19715]: I0313 12:49:50.901554 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"
Mar 13 12:49:50.901595 master-0 kubenswrapper[19715]: I0313 12:49:50.901592 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-metrics-tls\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:49:50.901815 master-0 kubenswrapper[19715]: I0313 12:49:50.901630 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-apiservice-cert\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"
Mar 13 12:49:50.901815 master-0 kubenswrapper[19715]: I0313 12:49:50.901671 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-config\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:49:50.901815 master-0 kubenswrapper[19715]: I0313 12:49:50.901721 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-var-lock\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 12:49:50.901815 master-0 kubenswrapper[19715]: I0313 12:49:50.901754 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:49:50.901815 master-0 kubenswrapper[19715]: I0313 12:49:50.901808 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-client-ca\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:49:50.902059 master-0 kubenswrapper[19715]: I0313 12:49:50.901906 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-encryption-config\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:50.902106 master-0 kubenswrapper[19715]: I0313 12:49:50.902074 19715 operation_generator.go:637]
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-apiservice-cert\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:49:50.902106 master-0 kubenswrapper[19715]: I0313 12:49:50.902089 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:49:50.902194 master-0 kubenswrapper[19715]: I0313 12:49:50.902176 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-metrics-tls\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" Mar 13 12:49:50.902229 master-0 kubenswrapper[19715]: I0313 12:49:50.902216 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-webhook-cert\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:49:50.902274 master-0 kubenswrapper[19715]: I0313 12:49:50.902254 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4m68d\" (UniqueName: \"kubernetes.io/projected/e8d83309-58b2-40af-ab48-1f8b9aeffefb-kube-api-access-4m68d\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " 
pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" Mar 13 12:49:50.902307 master-0 kubenswrapper[19715]: I0313 12:49:50.902279 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-serving-cert\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:49:50.902348 master-0 kubenswrapper[19715]: I0313 12:49:50.902318 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:49:50.902379 master-0 kubenswrapper[19715]: I0313 12:49:50.902356 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-systemd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.902379 master-0 kubenswrapper[19715]: I0313 12:49:50.902359 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-client-ca\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:49:50.902435 master-0 kubenswrapper[19715]: I0313 12:49:50.902377 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-d7qrz\" (UID: \"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz" Mar 13 12:49:50.902435 master-0 kubenswrapper[19715]: I0313 12:49:50.902397 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-modprobe-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:50.902725 master-0 kubenswrapper[19715]: I0313 12:49:50.902681 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-serving-cert\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:49:50.902725 master-0 kubenswrapper[19715]: I0313 12:49:50.902677 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-d7qrz\" (UID: \"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz" Mar 13 12:49:50.902725 master-0 kubenswrapper[19715]: I0313 12:49:50.902717 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-client\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:49:50.902859 master-0 kubenswrapper[19715]: I0313 12:49:50.902752 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-catalog-content\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:49:50.902859 master-0 kubenswrapper[19715]: I0313 12:49:50.902816 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-image-import-ca\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:50.902859 master-0 kubenswrapper[19715]: I0313 12:49:50.902845 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-audit-policies\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:49:50.902956 master-0 kubenswrapper[19715]: I0313 12:49:50.902875 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hpcb\" (UniqueName: \"kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-kube-api-access-6hpcb\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:49:50.902956 master-0 kubenswrapper[19715]: I0313 12:49:50.902905 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:49:50.902956 master-0 kubenswrapper[19715]: I0313 12:49:50.902924 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/90c6474d-44a1-4164-a85b-6de0525dc656-webhook-cert\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:49:50.902956 master-0 kubenswrapper[19715]: I0313 12:49:50.902937 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:49:50.903107 master-0 kubenswrapper[19715]: I0313 12:49:50.902980 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovn-node-metrics-cert\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.903107 master-0 kubenswrapper[19715]: I0313 12:49:50.903014 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:50.903107 master-0 kubenswrapper[19715]: I0313 12:49:50.903040 19715 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-catalog-content\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:49:50.903107 master-0 kubenswrapper[19715]: I0313 12:49:50.903066 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-catalog-content\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:49:50.903107 master-0 kubenswrapper[19715]: I0313 12:49:50.903065 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-images\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:49:50.903337 master-0 kubenswrapper[19715]: I0313 12:49:50.903206 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-catalog-content\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:49:50.903337 master-0 kubenswrapper[19715]: I0313 12:49:50.903228 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f85ab8ab-f9f1-47ad-9c96-9498cef92474-metrics-tls\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:49:50.903337 master-0 kubenswrapper[19715]: I0313 
12:49:50.903288 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:49:50.903337 master-0 kubenswrapper[19715]: I0313 12:49:50.903292 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-images\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:49:50.903337 master-0 kubenswrapper[19715]: I0313 12:49:50.903328 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wk9km\" (UniqueName: \"kubernetes.io/projected/d53c7e46-86e9-4328-9dfd-aec6deef5c01-kube-api-access-wk9km\") pod \"migrator-57ccdf9b5-xt828\" (UID: \"d53c7e46-86e9-4328-9dfd-aec6deef5c01\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828" Mar 13 12:49:50.903337 master-0 kubenswrapper[19715]: I0313 12:49:50.903336 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-client\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:49:50.903676 master-0 kubenswrapper[19715]: I0313 12:49:50.903363 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: 
\"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:49:50.903676 master-0 kubenswrapper[19715]: I0313 12:49:50.903453 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-image-import-ca\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:50.903676 master-0 kubenswrapper[19715]: I0313 12:49:50.903456 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-audit-policies\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:49:50.903676 master-0 kubenswrapper[19715]: I0313 12:49:50.903527 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-node-log\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.903676 master-0 kubenswrapper[19715]: I0313 12:49:50.903554 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e0763043-3813-43b6-9618-b2d15c942edb-config\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:49:50.903676 master-0 kubenswrapper[19715]: I0313 12:49:50.903561 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-kubernetes\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:50.903676 master-0 kubenswrapper[19715]: I0313 12:49:50.903610 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:50.903676 master-0 kubenswrapper[19715]: I0313 12:49:50.903641 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-snapshots\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:49:50.903676 master-0 kubenswrapper[19715]: I0313 12:49:50.903667 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-kubelet\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.903694 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-etc-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.903734 19715 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovn-node-metrics-cert\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.903733 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpjj6\" (UniqueName: \"kubernetes.io/projected/7574e950-de2e-4f90-99d0-eae3b45cd900-kube-api-access-hpjj6\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.903791 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343df96-cba2-477b-8a1b-7af369620440-serving-cert\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.903819 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2ad4825-17fa-4ddd-b21e-334158f1c048-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.903836 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-snapshots\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " 
pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.903851 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qg7nx\" (UniqueName: \"kubernetes.io/projected/cf580693-2931-4fef-adb5-b396f7303352-kube-api-access-qg7nx\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.903843 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.903950 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-catalog-content\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.903988 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-992bv\" (UniqueName: \"kubernetes.io/projected/edde8919-104a-4f05-8e21-46787f706bed-kube-api-access-992bv\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.904017 19715 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-ovn\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.904042 master-0 kubenswrapper[19715]: I0313 12:49:50.904030 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b2ad4825-17fa-4ddd-b21e-334158f1c048-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904057 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wqpz\" (UniqueName: \"kubernetes.io/projected/1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53-kube-api-access-9wqpz\") pod \"csi-snapshot-controller-7577d6f48-lf2dh\" (UID: \"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904089 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-encryption-config\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904088 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343df96-cba2-477b-8a1b-7af369620440-serving-cert\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") 
" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904090 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-catalog-content\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904115 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-lib-modules\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904176 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2dq8\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-kube-api-access-c2dq8\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904203 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8d83309-58b2-40af-ab48-1f8b9aeffefb-rootfs\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904226 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-encryption-config\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904229 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg5p4\" (UniqueName: \"kubernetes.io/projected/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-kube-api-access-dg5p4\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904260 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9nhl\" (UniqueName: \"kubernetes.io/projected/ffcc3a23-d81c-4064-a24a-857dbe3222c8-kube-api-access-b9nhl\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:50.904799 master-0 kubenswrapper[19715]: I0313 12:49:50.904518 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/1e9803a4-a166-42dc-9498-57e213602684-signing-cabundle\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" Mar 13 12:49:50.905622 master-0 kubenswrapper[19715]: I0313 12:49:50.905570 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:49:50.905676 master-0 kubenswrapper[19715]: I0313 
12:49:50.905643 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64hl9\" (UniqueName: \"kubernetes.io/projected/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-kube-api-access-64hl9\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:49:50.905756 master-0 kubenswrapper[19715]: I0313 12:49:50.905735 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:50.905817 master-0 kubenswrapper[19715]: I0313 12:49:50.905775 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-bin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:50.905817 master-0 kubenswrapper[19715]: I0313 12:49:50.905803 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9vsld\" (UniqueName: \"kubernetes.io/projected/73dc5747-2d30-4a2d-a784-1dea1e10811d-kube-api-access-9vsld\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:49:50.905904 master-0 kubenswrapper[19715]: I0313 12:49:50.905860 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/0943b2db-9658-4a8d-89da-00779d55db6e-audit-dir\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:49:50.905904 master-0 kubenswrapper[19715]: I0313 12:49:50.905885 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:49:50.906100 master-0 kubenswrapper[19715]: I0313 12:49:50.905901 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edde8919-104a-4f05-8e21-46787f706bed-serving-cert\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:49:50.906165 master-0 kubenswrapper[19715]: I0313 12:49:50.906108 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-utilities\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:49:50.906165 master-0 kubenswrapper[19715]: I0313 12:49:50.906153 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 
12:49:50.906224 master-0 kubenswrapper[19715]: I0313 12:49:50.906174 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:49:50.906270 master-0 kubenswrapper[19715]: I0313 12:49:50.906195 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkjph\" (UniqueName: \"kubernetes.io/projected/f2a74c2a-8376-4998-bdc6-02a978f1f568-kube-api-access-bkjph\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:49:50.906310 master-0 kubenswrapper[19715]: I0313 12:49:50.906284 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-utilities\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:49:50.906310 master-0 kubenswrapper[19715]: I0313 12:49:50.906306 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-var-lib-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.906378 master-0 kubenswrapper[19715]: I0313 12:49:50.906308 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "bc244427-5e4e-441c-a04d-f93aeca9b7c1" (UID: "bc244427-5e4e-441c-a04d-f93aeca9b7c1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:49:50.906378 master-0 kubenswrapper[19715]: I0313 12:49:50.906326 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cni-binary-copy\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:50.906378 master-0 kubenswrapper[19715]: I0313 12:49:50.906371 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9sfh\" (UniqueName: \"kubernetes.io/projected/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-kube-api-access-r9sfh\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:49:50.906461 master-0 kubenswrapper[19715]: I0313 12:49:50.906392 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-proxy-ca-bundles\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:49:50.906461 master-0 kubenswrapper[19715]: I0313 12:49:50.906410 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-client-ca\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " 
pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:49:50.906461 master-0 kubenswrapper[19715]: I0313 12:49:50.906427 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:50.906461 master-0 kubenswrapper[19715]: I0313 12:49:50.906448 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2894g\" (UniqueName: \"kubernetes.io/projected/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-kube-api-access-2894g\") pod \"cluster-storage-operator-6fbfc8dc8f-hr4ws\" (UID: \"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" Mar 13 12:49:50.906618 master-0 kubenswrapper[19715]: I0313 12:49:50.906464 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:49:50.906618 master-0 kubenswrapper[19715]: I0313 12:49:50.906483 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jn59j\" (UniqueName: \"kubernetes.io/projected/6d1a0616-4479-4621-b042-36a586bd8248-kube-api-access-jn59j\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:50.906618 master-0 kubenswrapper[19715]: I0313 12:49:50.906500 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-metrics-tls\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:49:50.906618 master-0 kubenswrapper[19715]: I0313 12:49:50.906527 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cv745\" (UniqueName: \"kubernetes.io/projected/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-kube-api-access-cv745\") pod \"control-plane-machine-set-operator-6686554ddc-d7qrz\" (UID: \"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz" Mar 13 12:49:50.906618 master-0 kubenswrapper[19715]: I0313 12:49:50.906559 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d868028-9984-472a-8403-ffed767e1bf8-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:49:50.906618 master-0 kubenswrapper[19715]: I0313 12:49:50.906607 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/73dc5747-2d30-4a2d-a784-1dea1e10811d-config\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:49:50.906618 master-0 kubenswrapper[19715]: I0313 12:49:50.906614 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-utilities\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " 
pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:49:50.906940 master-0 kubenswrapper[19715]: I0313 12:49:50.906634 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:50.907159 master-0 kubenswrapper[19715]: I0313 12:49:50.907120 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/edde8919-104a-4f05-8e21-46787f706bed-serving-cert\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:49:50.907273 master-0 kubenswrapper[19715]: I0313 12:49:50.907198 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-proxy-ca-bundles\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:49:50.907273 master-0 kubenswrapper[19715]: I0313 12:49:50.906530 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cni-binary-copy\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:50.907364 master-0 kubenswrapper[19715]: I0313 12:49:50.907287 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-trusted-ca\") 
pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:49:50.907364 master-0 kubenswrapper[19715]: I0313 12:49:50.907327 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kube-api-access\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:50.907440 master-0 kubenswrapper[19715]: I0313 12:49:50.907349 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-env-overrides\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.907440 master-0 kubenswrapper[19715]: I0313 12:49:50.907425 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-service-ca-bundle\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:49:50.907539 master-0 kubenswrapper[19715]: I0313 12:49:50.907448 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-client\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:49:50.907539 master-0 kubenswrapper[19715]: I0313 12:49:50.907456 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/73dc5747-2d30-4a2d-a784-1dea1e10811d-config\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:49:50.907539 master-0 kubenswrapper[19715]: I0313 12:49:50.907464 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/2a5976df-0366-47b3-bc54-1ba7c249e87c-srv-cert\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:49:50.907701 master-0 kubenswrapper[19715]: I0313 12:49:50.907634 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cf9f90f5-643f-41e8-a886-7d19fb064afc-utilities\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:49:50.907701 master-0 kubenswrapper[19715]: I0313 12:49:50.907467 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:49:50.907813 master-0 kubenswrapper[19715]: I0313 12:49:50.907727 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-metrics-tls\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:49:50.907813 master-0 kubenswrapper[19715]: I0313 12:49:50.907744 19715 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:49:50.907813 master-0 kubenswrapper[19715]: I0313 12:49:50.907779 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-utilities\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:49:50.907956 master-0 kubenswrapper[19715]: I0313 12:49:50.907814 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b2lvh\" (UniqueName: \"kubernetes.io/projected/1ad68c2d-762a-47ed-bd56-e823a83b9087-kube-api-access-b2lvh\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.907956 master-0 kubenswrapper[19715]: I0313 12:49:50.907833 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-env-overrides\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.907956 master-0 kubenswrapper[19715]: I0313 12:49:50.907844 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvn5d\" (UniqueName: \"kubernetes.io/projected/14eb83e7-c436-4f10-8cba-29e09a7036a8-kube-api-access-kvn5d\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" Mar 13 12:49:50.907956 master-0 
kubenswrapper[19715]: I0313 12:49:50.907850 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-etcd-client\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:49:50.907956 master-0 kubenswrapper[19715]: I0313 12:49:50.907876 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bdvgq\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-kube-api-access-bdvgq\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:49:50.907956 master-0 kubenswrapper[19715]: I0313 12:49:50.907888 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-service-ca-bundle\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:49:50.907956 master-0 kubenswrapper[19715]: I0313 12:49:50.907908 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vqww\" (UniqueName: \"kubernetes.io/projected/1e9803a4-a166-42dc-9498-57e213602684-kube-api-access-4vqww\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" Mar 13 12:49:50.907956 master-0 kubenswrapper[19715]: I0313 12:49:50.907945 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-kubelet-dir\") pod \"installer-4-master-0\" 
(UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.907976 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-serving-cert\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.907913 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/730e1f43-39b7-41de-ac81-270966725477-utilities\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.907977 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-client-ca\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.908007 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-systemd\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.908018 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.908049 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-config\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.908085 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-cnibin\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.908110 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-etc-kubernetes\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.908137 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6592aa5b-4a50-40f6-80a5-87e3c547018d-cert\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.908170 19715 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-utilities\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.908198 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-netns\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.908286 master-0 kubenswrapper[19715]: I0313 12:49:50.908219 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-netd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908306 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkcxc\" (UniqueName: \"kubernetes.io/projected/8226ffac-1f76-4eaa-ada5-056b5fd031b4-kube-api-access-gkcxc\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908326 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5623ea13-a34b-4510-8902-341912d115df-utilities\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:49:50.908991 master-0 
kubenswrapper[19715]: I0313 12:49:50.908339 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b5ab386-14ed-4610-a08a-54b6de877603-iptables-alerter-script\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908265 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908273 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-serving-cert\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908370 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-trusted-ca-bundle\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908416 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27pbr\" (UniqueName: \"kubernetes.io/projected/2a5976df-0366-47b3-bc54-1ba7c249e87c-kube-api-access-27pbr\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: 
\"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908454 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6592aa5b-4a50-40f6-80a5-87e3c547018d-cert\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908464 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-config\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908460 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908524 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908560 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-serving-ca\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908617 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vg7m\" (UniqueName: \"kubernetes.io/projected/7343df96-cba2-477b-8a1b-7af369620440-kube-api-access-6vg7m\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908614 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/2b5ab386-14ed-4610-a08a-54b6de877603-iptables-alerter-script\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908760 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-script-lib\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908783 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:50.908991 
master-0 kubenswrapper[19715]: I0313 12:49:50.908794 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908876 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-serving-ca\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908883 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-config\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908935 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908961 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-tmp\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.908991 master-0 kubenswrapper[19715]: I0313 12:49:50.908990 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-binary-copy\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909017 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909031 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/03758d96-5a20-4cba-92e0-47f5b1a3e558-config\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909037 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1ad68c2d-762a-47ed-bd56-e823a83b9087-ovnkube-script-lib\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909068 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8226ffac-1f76-4eaa-ada5-056b5fd031b4-srv-cert\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909077 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909191 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909224 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909251 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909276 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7s2cb\" (UniqueName: \"kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909299 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-tmp\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909317 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-serving-cert\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909337 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-hr4ws\" (UID: \"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909499 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/16c2d774-967f-4964-ab4e-eb13c4364f63-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909494 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909544 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc1c9136-80e1-4736-8959-cc1e58aee26e-service-ca\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909546 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-hr4ws\" (UID: \"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909564 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-trusted-ca-bundle\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909597 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909621 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vgd4v\" (UniqueName: \"kubernetes.io/projected/0943b2db-9658-4a8d-89da-00779d55db6e-kube-api-access-vgd4v\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909606 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909645 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909679 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909701 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-netns\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909721 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-etc-tuned\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909747 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0943b2db-9658-4a8d-89da-00779d55db6e-trusted-ca-bundle\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909750 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cnibin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909802 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909818 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/dc1c9136-80e1-4736-8959-cc1e58aee26e-service-ca\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909831 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc1c9136-80e1-4736-8959-cc1e58aee26e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909809 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/58581675-62f2-4564-9e12-bf34551b96ac-etc-tuned\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909856 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-client\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909892 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysconfig\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909908 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e0763043-3813-43b6-9618-b2d15c942edb-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909924 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c2d774-967f-4964-ab4e-eb13c4364f63-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.909958 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/54c7efc1-6d89-4831-89d6-6f2812c36c36-operand-assets\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.910032 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.910046 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.910058 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/14eb83e7-c436-4f10-8cba-29e09a7036a8-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.910078 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:50.910054 master-0 kubenswrapper[19715]: I0313 12:49:50.910082 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/54c7efc1-6d89-4831-89d6-6f2812c36c36-operand-assets\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910123 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7574e950-de2e-4f90-99d0-eae3b45cd900-etcd-client\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910211 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90c6474d-44a1-4164-a85b-6de0525dc656-tmpfs\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910220 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dc1c9136-80e1-4736-8959-cc1e58aee26e-serving-cert\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910248 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910254 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/14eb83e7-c436-4f10-8cba-29e09a7036a8-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910300 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0943b2db-9658-4a8d-89da-00779d55db6e-serving-cert\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910300 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-multus\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910339 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvprm\" (UniqueName: \"kubernetes.io/projected/20217cff-2f81-4a56-9c15-28385c19258c-kube-api-access-nvprm\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910348 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/90c6474d-44a1-4164-a85b-6de0525dc656-tmpfs\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910359 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-os-release\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910360 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910381 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/16c2d774-967f-4964-ab4e-eb13c4364f63-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910393 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-systemd-units\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910431 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-daemon-config\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910458 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-host-etc-kube\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910491 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2jkn\" (UniqueName: \"kubernetes.io/projected/6e55908e-59f3-45a2-82aa-2616c5a2fd52-kube-api-access-x2jkn\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910518 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-run\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910543 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-var-lib-kubelet\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910626 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910519 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-trusted-ca-bundle\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910670 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/edde8919-104a-4f05-8e21-46787f706bed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910642 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-daemon-config\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910701 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910739 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910773 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/edde8919-104a-4f05-8e21-46787f706bed-available-featuregates\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910774 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-bound-sa-token\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910815 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910834 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910852 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-config\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910849 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6d1a0616-4479-4621-b042-36a586bd8248-cni-binary-copy\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910921 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/59c9773d-7e88-4e30-9b8a-792a869a860e-metrics-certs\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910953 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.910957 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqhcp\" (UniqueName: \"kubernetes.io/projected/e0763043-3813-43b6-9618-b2d15c942edb-kube-api-access-mqhcp\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911007 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911041 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hflng\" (UniqueName: \"kubernetes.io/projected/f726d662-90e1-45b9-9bba-76a9c03faced-kube-api-access-hflng\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911045 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/6e55908e-59f3-45a2-82aa-2616c5a2fd52-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911064 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911085 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/71b741d4-3899-4d31-afd1-72f5a9321f75-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911127 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdcsm\" (UniqueName: \"kubernetes.io/projected/6e4e773c-d970-4f5e-9172-c1ebdb41888d-kube-api-access-tdcsm\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911179 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8d83309-58b2-40af-ab48-1f8b9aeffefb-mcd-auth-proxy-config\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911201 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pr995\" (UniqueName: \"kubernetes.io/projected/cf9f90f5-643f-41e8-a886-7d19fb064afc-kube-api-access-pr995\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911211 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7574e950-de2e-4f90-99d0-eae3b45cd900-config\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911229 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f726d662-90e1-45b9-9bba-76a9c03faced-hosts-file\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911250 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-node-pullsecrets\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911343 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/6e4e773c-d970-4f5e-9172-c1ebdb41888d-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911371 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/31442e1e-3f42-4dba-82d5-08e5f8d29a58-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577"
Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911480 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e8d83309-58b2-40af-ab48-1f8b9aeffefb-mcd-auth-proxy-config\") pod \"machine-config-daemon-mlgxw\" (UID: 
\"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" Mar 13 12:49:50.911563 master-0 kubenswrapper[19715]: I0313 12:49:50.911495 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm25n\" (UniqueName: \"kubernetes.io/projected/f85ab8ab-f9f1-47ad-9c96-9498cef92474-kube-api-access-sm25n\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:49:50.913214 master-0 kubenswrapper[19715]: I0313 12:49:50.911659 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:49:50.913214 master-0 kubenswrapper[19715]: I0313 12:49:50.911573 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/71b741d4-3899-4d31-afd1-72f5a9321f75-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:49:50.913214 master-0 kubenswrapper[19715]: I0313 12:49:50.911844 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2a74c2a-8376-4998-bdc6-02a978f1f568-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:49:50.921974 master-0 kubenswrapper[19715]: I0313 12:49:50.921892 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80ceb0f9-67e4-4275-8532-85b6602367a2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") 
pod "80ceb0f9-67e4-4275-8532-85b6602367a2" (UID: "80ceb0f9-67e4-4275-8532-85b6602367a2"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:49:50.936387 master-0 kubenswrapper[19715]: I0313 12:49:50.936336 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x27d2\" (UniqueName: \"kubernetes.io/projected/5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0-kube-api-access-x27d2\") pod \"ovnkube-control-plane-66b55d57d-dhtgf\" (UID: \"5fb9acd0-a021-40ca-bfaa-d1bfd2932ca0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-dhtgf" Mar 13 12:49:50.958614 master-0 kubenswrapper[19715]: I0313 12:49:50.958484 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lm4d2\" (UniqueName: \"kubernetes.io/projected/31442e1e-3f42-4dba-82d5-08e5f8d29a58-kube-api-access-lm4d2\") pod \"cloud-credential-operator-55d85b7b47-cb577\" (UID: \"31442e1e-3f42-4dba-82d5-08e5f8d29a58\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-cb577" Mar 13 12:49:50.978698 master-0 kubenswrapper[19715]: I0313 12:49:50.978650 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rspzx\" (UniqueName: \"kubernetes.io/projected/603fef71-e0cd-4617-bd8a-a55580578c2f-kube-api-access-rspzx\") pod \"service-ca-operator-69b6fc6b88-2d882\" (UID: \"603fef71-e0cd-4617-bd8a-a55580578c2f\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-2d882" Mar 13 12:49:50.997762 master-0 kubenswrapper[19715]: I0313 12:49:50.997700 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n8sb\" (UniqueName: \"kubernetes.io/projected/a6a45be0-19ef-4d36-b8a7-eb2705d24bfa-kube-api-access-9n8sb\") pod \"csi-snapshot-controller-operator-5685fbc7d-77b2h\" (UID: \"a6a45be0-19ef-4d36-b8a7-eb2705d24bfa\") " 
pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-77b2h" Mar 13 12:49:51.012631 master-0 kubenswrapper[19715]: I0313 12:49:51.012553 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56" Mar 13 12:49:51.012631 master-0 kubenswrapper[19715]: I0313 12:49:51.012621 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-bin\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012645 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-audit-dir\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012663 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-var-lock\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012681 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: 
\"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012689 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-socket-dir-parent\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012733 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-socket-dir-parent\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012749 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012761 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-bin\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012782 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-k8s-cni-cncf-io\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012788 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-audit-dir\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012804 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-host\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012810 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-var-lock\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012830 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:51.012828 master-0 kubenswrapper[19715]: I0313 12:49:51.012839 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: 
\"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-k8s-cni-cncf-io\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.012868 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.012891 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-sys\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.012931 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.012935 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-host\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.012948 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-slash\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.012966 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-system-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.012968 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-sys\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.012985 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-slash\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.013003 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-hostroot\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.013013 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-system-cni-dir\") pod \"multus-6c7r9\" (UID: 
\"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.013034 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-hostroot\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.013034 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-log-socket\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.013184 master-0 kubenswrapper[19715]: I0313 12:49:51.013051 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-log-socket\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.013560 master-0 kubenswrapper[19715]: I0313 12:49:51.013250 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-kubelet\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.013649 master-0 kubenswrapper[19715]: I0313 12:49:51.013073 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-kubelet\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 
12:49:51.013689 master-0 kubenswrapper[19715]: I0313 12:49:51.013664 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56" Mar 13 12:49:51.013725 master-0 kubenswrapper[19715]: I0313 12:49:51.013710 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-system-cni-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:51.013755 master-0 kubenswrapper[19715]: I0313 12:49:51.013738 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-os-release\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.013803 master-0 kubenswrapper[19715]: I0313 12:49:51.013764 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-multus-certs\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.013803 master-0 kubenswrapper[19715]: I0313 12:49:51.013786 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-conf\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " 
pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.013861 master-0 kubenswrapper[19715]: I0313 12:49:51.013817 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:51.013861 master-0 kubenswrapper[19715]: I0313 12:49:51.013841 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-var-lock\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:51.013923 master-0 kubenswrapper[19715]: I0313 12:49:51.013875 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-systemd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.013923 master-0 kubenswrapper[19715]: I0313 12:49:51.013899 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-modprobe-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.013975 master-0 kubenswrapper[19715]: I0313 12:49:51.013934 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-cni-dir\") pod \"multus-6c7r9\" (UID: 
\"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.013975 master-0 kubenswrapper[19715]: I0313 12:49:51.013967 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-node-log\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.014031 master-0 kubenswrapper[19715]: I0313 12:49:51.013991 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-kubernetes\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.014031 master-0 kubenswrapper[19715]: I0313 12:49:51.014013 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.014102 master-0 kubenswrapper[19715]: I0313 12:49:51.014038 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-kubelet\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.014102 master-0 kubenswrapper[19715]: I0313 12:49:51.014061 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-etc-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") 
" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.014160 master-0 kubenswrapper[19715]: I0313 12:49:51.014105 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-lib-modules\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.014160 master-0 kubenswrapper[19715]: I0313 12:49:51.014137 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8d83309-58b2-40af-ab48-1f8b9aeffefb-rootfs\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" Mar 13 12:49:51.014160 master-0 kubenswrapper[19715]: I0313 12:49:51.014156 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-ovn\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.014240 master-0 kubenswrapper[19715]: I0313 12:49:51.014208 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0943b2db-9658-4a8d-89da-00779d55db6e-audit-dir\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:49:51.014271 master-0 kubenswrapper[19715]: I0313 12:49:51.014236 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: 
\"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:51.014271 master-0 kubenswrapper[19715]: I0313 12:49:51.014260 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-bin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.014332 master-0 kubenswrapper[19715]: I0313 12:49:51.014298 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-var-lib-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.014362 master-0 kubenswrapper[19715]: I0313 12:49:51.014342 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:49:51.014447 master-0 kubenswrapper[19715]: I0313 12:49:51.014419 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:51.014486 master-0 kubenswrapper[19715]: I0313 12:49:51.014479 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: 
\"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-systemd\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.014516 master-0 kubenswrapper[19715]: I0313 12:49:51.014497 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:51.014546 master-0 kubenswrapper[19715]: I0313 12:49:51.014513 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-cnibin\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:51.014546 master-0 kubenswrapper[19715]: I0313 12:49:51.014528 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-etc-kubernetes\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.014613 master-0 kubenswrapper[19715]: I0313 12:49:51.014555 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-netns\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.014613 master-0 kubenswrapper[19715]: I0313 12:49:51.014588 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-netd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.014694 master-0 kubenswrapper[19715]: I0313 12:49:51.014630 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:49:51.014694 master-0 kubenswrapper[19715]: I0313 12:49:51.014658 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.014694 master-0 kubenswrapper[19715]: I0313 12:49:51.014683 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.014780 master-0 kubenswrapper[19715]: I0313 12:49:51.014707 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cnibin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.014780 master-0 kubenswrapper[19715]: I0313 12:49:51.014722 19715 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-netns\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.014780 master-0 kubenswrapper[19715]: I0313 12:49:51.014736 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysconfig\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.014780 master-0 kubenswrapper[19715]: I0313 12:49:51.014754 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:49:51.014780 master-0 kubenswrapper[19715]: I0313 12:49:51.014777 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-multus\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.014919 master-0 kubenswrapper[19715]: I0313 12:49:51.014792 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-os-release\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:51.014919 master-0 kubenswrapper[19715]: I0313 12:49:51.014816 19715 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-systemd-units\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.014919 master-0 kubenswrapper[19715]: I0313 12:49:51.014832 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-host-etc-kube\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" Mar 13 12:49:51.014919 master-0 kubenswrapper[19715]: I0313 12:49:51.014847 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-run\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.014919 master-0 kubenswrapper[19715]: I0313 12:49:51.014863 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-var-lib-kubelet\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.014919 master-0 kubenswrapper[19715]: I0313 12:49:51.014903 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:49:51.015111 master-0 kubenswrapper[19715]: I0313 
12:49:51.014928 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f726d662-90e1-45b9-9bba-76a9c03faced-hosts-file\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9" Mar 13 12:49:51.015111 master-0 kubenswrapper[19715]: I0313 12:49:51.014945 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-node-pullsecrets\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:51.015111 master-0 kubenswrapper[19715]: I0313 12:49:51.014961 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-conf-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.015111 master-0 kubenswrapper[19715]: I0313 12:49:51.014978 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/2b5ab386-14ed-4610-a08a-54b6de877603-host-slash\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:49:51.015111 master-0 kubenswrapper[19715]: I0313 12:49:51.015040 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/80ceb0f9-67e4-4275-8532-85b6602367a2-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:49:51.015111 master-0 kubenswrapper[19715]: I0313 12:49:51.015073 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: 
\"kubernetes.io/host-path/2b5ab386-14ed-4610-a08a-54b6de877603-host-slash\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:49:51.015111 master-0 kubenswrapper[19715]: I0313 12:49:51.015100 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/dc1c9136-80e1-4736-8959-cc1e58aee26e-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56" Mar 13 12:49:51.015312 master-0 kubenswrapper[19715]: I0313 12:49:51.015121 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-system-cni-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:51.015312 master-0 kubenswrapper[19715]: I0313 12:49:51.015168 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-os-release\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.015312 master-0 kubenswrapper[19715]: I0313 12:49:51.015190 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-multus-certs\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.015392 master-0 kubenswrapper[19715]: I0313 12:49:51.015345 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: 
\"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-conf\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.015392 master-0 kubenswrapper[19715]: I0313 12:49:51.015378 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:51.015448 master-0 kubenswrapper[19715]: I0313 12:49:51.015401 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-var-lock\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:51.015448 master-0 kubenswrapper[19715]: I0313 12:49:51.015424 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-systemd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.015551 master-0 kubenswrapper[19715]: I0313 12:49:51.015521 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-modprobe-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.015612 master-0 kubenswrapper[19715]: I0313 12:49:51.015596 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-cni-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.015658 master-0 kubenswrapper[19715]: I0313 12:49:51.015635 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-node-log\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.015718 master-0 kubenswrapper[19715]: I0313 12:49:51.015698 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-kubernetes\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.015827 master-0 kubenswrapper[19715]: I0313 12:49:51.015744 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysctl-d\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.015827 master-0 kubenswrapper[19715]: I0313 12:49:51.015769 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-kubelet\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.015827 master-0 kubenswrapper[19715]: I0313 12:49:51.015791 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-etc-openvswitch\") pod 
\"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.015931 master-0 kubenswrapper[19715]: I0313 12:49:51.015875 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-lib-modules\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.015931 master-0 kubenswrapper[19715]: I0313 12:49:51.015903 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/e8d83309-58b2-40af-ab48-1f8b9aeffefb-rootfs\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" Mar 13 12:49:51.015931 master-0 kubenswrapper[19715]: I0313 12:49:51.015924 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-ovn\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.016006 master-0 kubenswrapper[19715]: I0313 12:49:51.015947 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/0943b2db-9658-4a8d-89da-00779d55db6e-audit-dir\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:49:51.016040 master-0 kubenswrapper[19715]: I0313 12:49:51.016013 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-tuning-conf-dir\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: 
\"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:51.016069 master-0 kubenswrapper[19715]: I0313 12:49:51.016054 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-bin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.016099 master-0 kubenswrapper[19715]: I0313 12:49:51.016077 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-var-lib-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.016164 master-0 kubenswrapper[19715]: I0313 12:49:51.016146 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:49:51.016196 master-0 kubenswrapper[19715]: I0313 12:49:51.016183 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:51.016367 master-0 kubenswrapper[19715]: I0313 12:49:51.016212 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: 
\"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-systemd\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.016367 master-0 kubenswrapper[19715]: I0313 12:49:51.016235 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 12:49:51.016367 master-0 kubenswrapper[19715]: I0313 12:49:51.016255 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-cnibin\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:51.016367 master-0 kubenswrapper[19715]: I0313 12:49:51.016276 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-etc-kubernetes\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.016367 master-0 kubenswrapper[19715]: I0313 12:49:51.016297 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-run-netns\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.016367 master-0 kubenswrapper[19715]: I0313 12:49:51.016317 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-cni-netd\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.016367 master-0 kubenswrapper[19715]: I0313 12:49:51.016343 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/eda319d8-825a-4881-96a9-5386b87f8a4f-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:49:51.016367 master-0 kubenswrapper[19715]: I0313 12:49:51.016367 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.016664 master-0 kubenswrapper[19715]: I0313 12:49:51.016389 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-run-openvswitch\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.016664 master-0 kubenswrapper[19715]: I0313 12:49:51.016423 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-cnibin\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.016664 master-0 kubenswrapper[19715]: I0313 12:49:51.016445 19715 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-run-netns\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.016664 master-0 kubenswrapper[19715]: I0313 12:49:51.016482 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-etc-sysconfig\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.016664 master-0 kubenswrapper[19715]: I0313 12:49:51.016504 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-var-lock\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:49:51.016664 master-0 kubenswrapper[19715]: I0313 12:49:51.016527 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-host-var-lib-cni-multus\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.016664 master-0 kubenswrapper[19715]: I0313 12:49:51.016562 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6d1a0616-4479-4621-b042-36a586bd8248-os-release\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:51.016664 master-0 kubenswrapper[19715]: I0313 12:49:51.016619 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" 
(UniqueName: \"kubernetes.io/host-path/1ad68c2d-762a-47ed-bd56-e823a83b9087-systemd-units\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.016664 master-0 kubenswrapper[19715]: I0313 12:49:51.016660 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-host-etc-kube\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" Mar 13 12:49:51.016993 master-0 kubenswrapper[19715]: I0313 12:49:51.016688 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-run\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.016993 master-0 kubenswrapper[19715]: I0313 12:49:51.016724 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/58581675-62f2-4564-9e12-bf34551b96ac-var-lib-kubelet\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.016993 master-0 kubenswrapper[19715]: I0313 12:49:51.016772 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7574e950-de2e-4f90-99d0-eae3b45cd900-node-pullsecrets\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:51.016993 master-0 kubenswrapper[19715]: I0313 12:49:51.016941 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/f726d662-90e1-45b9-9bba-76a9c03faced-hosts-file\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9" Mar 13 12:49:51.017109 master-0 kubenswrapper[19715]: I0313 12:49:51.017059 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffcc3a23-d81c-4064-a24a-857dbe3222c8-multus-conf-dir\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.017153 master-0 kubenswrapper[19715]: I0313 12:49:51.017125 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kubelet-dir\") pod \"installer-1-retry-1-master-0\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " pod="openshift-kube-apiserver/installer-1-retry-1-master-0" Mar 13 12:49:51.018684 master-0 kubenswrapper[19715]: I0313 12:49:51.018649 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jknp\" (UniqueName: \"kubernetes.io/projected/3b1777e4-6833-4b68-8cdf-ea8b36dbeae9-kube-api-access-5jknp\") pod \"network-operator-7c649bf6d4-fcthv\" (UID: \"3b1777e4-6833-4b68-8cdf-ea8b36dbeae9\") " pod="openshift-network-operator/network-operator-7c649bf6d4-fcthv" Mar 13 12:49:51.038076 master-0 kubenswrapper[19715]: I0313 12:49:51.038019 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9tpt\" (UniqueName: \"kubernetes.io/projected/5623ea13-a34b-4510-8902-341912d115df-kube-api-access-q9tpt\") pod \"redhat-operators-28fdg\" (UID: \"5623ea13-a34b-4510-8902-341912d115df\") " pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:49:51.063569 master-0 kubenswrapper[19715]: I0313 12:49:51.063356 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h5ht\" 
(UniqueName: \"kubernetes.io/projected/71b741d4-3899-4d31-afd1-72f5a9321f75-kube-api-access-2h5ht\") pod \"cluster-monitoring-operator-674cbfbd9d-4jlnk\" (UID: \"71b741d4-3899-4d31-afd1-72f5a9321f75\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-4jlnk" Mar 13 12:49:51.077850 master-0 kubenswrapper[19715]: I0313 12:49:51.077694 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cscxl\" (UniqueName: \"kubernetes.io/projected/5ae6e46f-a465-46e6-bc27-d13fc6f90d8c-kube-api-access-cscxl\") pod \"cluster-samples-operator-664cb58b85-78swz\" (UID: \"5ae6e46f-a465-46e6-bc27-d13fc6f90d8c\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-78swz" Mar 13 12:49:51.085007 master-0 kubenswrapper[19715]: I0313 12:49:51.084931 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:49:51.085913 master-0 kubenswrapper[19715]: I0313 12:49:51.085038 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:49:51.096994 master-0 kubenswrapper[19715]: I0313 12:49:51.096933 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dc1c9136-80e1-4736-8959-cc1e58aee26e-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-rkg56\" (UID: \"dc1c9136-80e1-4736-8959-cc1e58aee26e\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-rkg56" Mar 13 12:49:51.116276 master-0 kubenswrapper[19715]: I0313 12:49:51.116184 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-kubelet-dir\") pod \"80ceb0f9-67e4-4275-8532-85b6602367a2\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " Mar 13 12:49:51.116276 master-0 kubenswrapper[19715]: I0313 
12:49:51.116304 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kubelet-dir\") pod \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " Mar 13 12:49:51.116614 master-0 kubenswrapper[19715]: I0313 12:49:51.116341 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-var-lock\") pod \"80ceb0f9-67e4-4275-8532-85b6602367a2\" (UID: \"80ceb0f9-67e4-4275-8532-85b6602367a2\") " Mar 13 12:49:51.116614 master-0 kubenswrapper[19715]: I0313 12:49:51.116359 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-var-lock\") pod \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\" (UID: \"bc244427-5e4e-441c-a04d-f93aeca9b7c1\") " Mar 13 12:49:51.117144 master-0 kubenswrapper[19715]: I0313 12:49:51.117097 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-var-lock" (OuterVolumeSpecName: "var-lock") pod "bc244427-5e4e-441c-a04d-f93aeca9b7c1" (UID: "bc244427-5e4e-441c-a04d-f93aeca9b7c1"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:49:51.117219 master-0 kubenswrapper[19715]: I0313 12:49:51.117170 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "80ceb0f9-67e4-4275-8532-85b6602367a2" (UID: "80ceb0f9-67e4-4275-8532-85b6602367a2"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:49:51.117219 master-0 kubenswrapper[19715]: I0313 12:49:51.117196 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "bc244427-5e4e-441c-a04d-f93aeca9b7c1" (UID: "bc244427-5e4e-441c-a04d-f93aeca9b7c1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:49:51.117344 master-0 kubenswrapper[19715]: I0313 12:49:51.117234 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-var-lock" (OuterVolumeSpecName: "var-lock") pod "80ceb0f9-67e4-4275-8532-85b6602367a2" (UID: "80ceb0f9-67e4-4275-8532-85b6602367a2"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:49:51.119334 master-0 kubenswrapper[19715]: I0313 12:49:51.119289 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s7cgb\" (UniqueName: \"kubernetes.io/projected/6592aa5b-4a50-40f6-80a5-87e3c547018d-kube-api-access-s7cgb\") pod \"cluster-autoscaler-operator-69576476f7-94zs2\" (UID: \"6592aa5b-4a50-40f6-80a5-87e3c547018d\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-94zs2" Mar 13 12:49:51.127157 master-0 kubenswrapper[19715]: I0313 12:49:51.127091 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-28fdg" Mar 13 12:49:51.138297 master-0 kubenswrapper[19715]: I0313 12:49:51.138232 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wwjh6\" (UniqueName: \"kubernetes.io/projected/90c6474d-44a1-4164-a85b-6de0525dc656-kube-api-access-wwjh6\") pod \"packageserver-5d9d8b6575-fk9v2\" (UID: \"90c6474d-44a1-4164-a85b-6de0525dc656\") " 
pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:49:51.157830 master-0 kubenswrapper[19715]: I0313 12:49:51.157763 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1929440f-f2cc-450d-80ff-ded6788baa74-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-gntkj\" (UID: \"1929440f-f2cc-450d-80ff-ded6788baa74\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-gntkj" Mar 13 12:49:51.180060 master-0 kubenswrapper[19715]: I0313 12:49:51.180000 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqxjz\" (UniqueName: \"kubernetes.io/projected/2b5ab386-14ed-4610-a08a-54b6de877603-kube-api-access-nqxjz\") pod \"iptables-alerter-456r5\" (UID: \"2b5ab386-14ed-4610-a08a-54b6de877603\") " pod="openshift-network-operator/iptables-alerter-456r5" Mar 13 12:49:51.198778 master-0 kubenswrapper[19715]: I0313 12:49:51.198727 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55v4q\" (UniqueName: \"kubernetes.io/projected/03758d96-5a20-4cba-92e0-47f5b1a3e558-kube-api-access-55v4q\") pod \"machine-api-operator-84bf6db4f9-zthfh\" (UID: \"03758d96-5a20-4cba-92e0-47f5b1a3e558\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-zthfh" Mar 13 12:49:51.218012 master-0 kubenswrapper[19715]: I0313 12:49:51.217883 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/684c9067-189a-4f50-ac8d-97111aa73d9c-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-hsrbc\" (UID: \"684c9067-189a-4f50-ac8d-97111aa73d9c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-hsrbc" Mar 13 12:49:51.218826 master-0 kubenswrapper[19715]: I0313 12:49:51.218795 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:49:51.218826 master-0 kubenswrapper[19715]: I0313 12:49:51.218821 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:49:51.218922 master-0 kubenswrapper[19715]: I0313 12:49:51.218836 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/80ceb0f9-67e4-4275-8532-85b6602367a2-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:49:51.218922 master-0 kubenswrapper[19715]: I0313 12:49:51.218849 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/bc244427-5e4e-441c-a04d-f93aeca9b7c1-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:49:51.237689 master-0 kubenswrapper[19715]: I0313 12:49:51.237609 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64w7v\" (UniqueName: \"kubernetes.io/projected/58581675-62f2-4564-9e12-bf34551b96ac-kube-api-access-64w7v\") pod \"tuned-d7h2t\" (UID: \"58581675-62f2-4564-9e12-bf34551b96ac\") " pod="openshift-cluster-node-tuning-operator/tuned-d7h2t" Mar 13 12:49:51.257111 master-0 kubenswrapper[19715]: I0313 12:49:51.257024 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qttkt\" (UniqueName: \"kubernetes.io/projected/54c7efc1-6d89-4831-89d6-6f2812c36c36-kube-api-access-qttkt\") pod \"cluster-olm-operator-77899cf6d-zt57b\" (UID: \"54c7efc1-6d89-4831-89d6-6f2812c36c36\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-zt57b" Mar 13 12:49:51.277541 master-0 kubenswrapper[19715]: I0313 12:49:51.277470 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vp6bn\" (UniqueName: 
\"kubernetes.io/projected/59c9773d-7e88-4e30-9b8a-792a869a860e-kube-api-access-vp6bn\") pod \"network-metrics-daemon-ztpxf\" (UID: \"59c9773d-7e88-4e30-9b8a-792a869a860e\") " pod="openshift-multus/network-metrics-daemon-ztpxf" Mar 13 12:49:51.297330 master-0 kubenswrapper[19715]: I0313 12:49:51.297260 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtpqk\" (UniqueName: \"kubernetes.io/projected/8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5-kube-api-access-qtpqk\") pod \"dns-default-qh2tf\" (UID: \"8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5\") " pod="openshift-dns/dns-default-qh2tf" Mar 13 12:49:51.318284 master-0 kubenswrapper[19715]: I0313 12:49:51.318191 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqm5h\" (UniqueName: \"kubernetes.io/projected/5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346-kube-api-access-pqm5h\") pod \"cluster-node-tuning-operator-66c7586884-mwnxf\" (UID: \"5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-mwnxf" Mar 13 12:49:51.394772 master-0 kubenswrapper[19715]: I0313 12:49:51.392625 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:49:51.394772 master-0 kubenswrapper[19715]: I0313 12:49:51.394552 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-qh2tf" Mar 13 12:49:51.400625 master-0 kubenswrapper[19715]: I0313 12:49:51.398206 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-qh2tf" Mar 13 12:49:51.400625 master-0 kubenswrapper[19715]: I0313 12:49:51.398392 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-5d9d8b6575-fk9v2" Mar 13 12:49:51.408281 master-0 kubenswrapper[19715]: I0313 12:49:51.408212 19715 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djchk\" (UniqueName: \"kubernetes.io/projected/b6a9184d-0557-4e61-bf31-6dd69c0dfb15-kube-api-access-djchk\") pod \"community-operators-6w8hd\" (UID: \"b6a9184d-0557-4e61-bf31-6dd69c0dfb15\") " pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:49:51.409770 master-0 kubenswrapper[19715]: I0313 12:49:51.409713 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:49:51.410643 master-0 kubenswrapper[19715]: I0313 12:49:51.410273 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w97j5\" (UniqueName: \"kubernetes.io/projected/a8c840d1-8047-4ad6-a990-3ab119ae1cc5-kube-api-access-w97j5\") pod \"catalogd-controller-manager-7f8b8b6f4c-lwxxn\" (UID: \"a8c840d1-8047-4ad6-a990-3ab119ae1cc5\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:51.411743 master-0 kubenswrapper[19715]: I0313 12:49:51.411709 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tnbf9\" (UniqueName: \"kubernetes.io/projected/b2ad4825-17fa-4ddd-b21e-334158f1c048-kube-api-access-tnbf9\") pod \"kube-storage-version-migrator-operator-7f65c457f5-tmc5z\" (UID: \"b2ad4825-17fa-4ddd-b21e-334158f1c048\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-tmc5z" Mar 13 12:49:51.421344 master-0 kubenswrapper[19715]: I0313 12:49:51.421301 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vt8r\" (UniqueName: 
\"kubernetes.io/projected/730e1f43-39b7-41de-ac81-270966725477-kube-api-access-2vt8r\") pod \"redhat-marketplace-92rsn\" (UID: \"730e1f43-39b7-41de-ac81-270966725477\") " pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:49:51.437250 master-0 kubenswrapper[19715]: I0313 12:49:51.437174 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4m68d\" (UniqueName: \"kubernetes.io/projected/e8d83309-58b2-40af-ab48-1f8b9aeffefb-kube-api-access-4m68d\") pod \"machine-config-daemon-mlgxw\" (UID: \"e8d83309-58b2-40af-ab48-1f8b9aeffefb\") " pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" Mar 13 12:49:51.458013 master-0 kubenswrapper[19715]: I0313 12:49:51.457955 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hpcb\" (UniqueName: \"kubernetes.io/projected/eda319d8-825a-4881-96a9-5386b87f8a4f-kube-api-access-6hpcb\") pod \"operator-controller-controller-manager-6598bfb6c4-rcfgn\" (UID: \"eda319d8-825a-4881-96a9-5386b87f8a4f\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:49:51.480282 master-0 kubenswrapper[19715]: I0313 12:49:51.480150 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wk9km\" (UniqueName: \"kubernetes.io/projected/d53c7e46-86e9-4328-9dfd-aec6deef5c01-kube-api-access-wk9km\") pod \"migrator-57ccdf9b5-xt828\" (UID: \"d53c7e46-86e9-4328-9dfd-aec6deef5c01\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-xt828" Mar 13 12:49:51.498699 master-0 kubenswrapper[19715]: I0313 12:49:51.498647 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpjj6\" (UniqueName: \"kubernetes.io/projected/7574e950-de2e-4f90-99d0-eae3b45cd900-kube-api-access-hpjj6\") pod \"apiserver-8459d5b549-n9fzj\" (UID: \"7574e950-de2e-4f90-99d0-eae3b45cd900\") " pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 
12:49:51.518264 master-0 kubenswrapper[19715]: I0313 12:49:51.518219 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qg7nx\" (UniqueName: \"kubernetes.io/projected/cf580693-2931-4fef-adb5-b396f7303352-kube-api-access-qg7nx\") pod \"network-node-identity-kb5r7\" (UID: \"cf580693-2931-4fef-adb5-b396f7303352\") " pod="openshift-network-node-identity/network-node-identity-kb5r7" Mar 13 12:49:51.538769 master-0 kubenswrapper[19715]: I0313 12:49:51.538703 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-992bv\" (UniqueName: \"kubernetes.io/projected/edde8919-104a-4f05-8e21-46787f706bed-kube-api-access-992bv\") pod \"openshift-config-operator-64488f9d78-tml9z\" (UID: \"edde8919-104a-4f05-8e21-46787f706bed\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:49:51.560701 master-0 kubenswrapper[19715]: I0313 12:49:51.560639 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wqpz\" (UniqueName: \"kubernetes.io/projected/1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53-kube-api-access-9wqpz\") pod \"csi-snapshot-controller-7577d6f48-lf2dh\" (UID: \"1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-lf2dh" Mar 13 12:49:51.580693 master-0 kubenswrapper[19715]: I0313 12:49:51.580625 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg5p4\" (UniqueName: \"kubernetes.io/projected/0ecab24a-cb8c-4171-9a04-c34d1d6d71c1-kube-api-access-dg5p4\") pod \"insights-operator-8f89dfddd-s4gd8\" (UID: \"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1\") " pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" Mar 13 12:49:51.601962 master-0 kubenswrapper[19715]: I0313 12:49:51.601907 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2dq8\" (UniqueName: 
\"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-kube-api-access-c2dq8\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:49:51.619679 master-0 kubenswrapper[19715]: I0313 12:49:51.618731 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9nhl\" (UniqueName: \"kubernetes.io/projected/ffcc3a23-d81c-4064-a24a-857dbe3222c8-kube-api-access-b9nhl\") pod \"multus-6c7r9\" (UID: \"ffcc3a23-d81c-4064-a24a-857dbe3222c8\") " pod="openshift-multus/multus-6c7r9" Mar 13 12:49:51.639968 master-0 kubenswrapper[19715]: I0313 12:49:51.639896 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64hl9\" (UniqueName: \"kubernetes.io/projected/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-kube-api-access-64hl9\") pod \"route-controller-manager-9955d496f-8zbkn\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:49:51.660346 master-0 kubenswrapper[19715]: I0313 12:49:51.660275 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9vsld\" (UniqueName: \"kubernetes.io/projected/73dc5747-2d30-4a2d-a784-1dea1e10811d-kube-api-access-9vsld\") pod \"openshift-apiserver-operator-799b6db4d7-74fhg\" (UID: \"73dc5747-2d30-4a2d-a784-1dea1e10811d\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-74fhg" Mar 13 12:49:51.668822 master-0 kubenswrapper[19715]: I0313 12:49:51.668757 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:49:51.675251 master-0 kubenswrapper[19715]: I0313 12:49:51.674962 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 
12:49:51.678000 master-0 kubenswrapper[19715]: I0313 12:49:51.677945 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:49:51.678230 master-0 kubenswrapper[19715]: I0313 12:49:51.678043 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:49:51.679272 master-0 kubenswrapper[19715]: I0313 12:49:51.679169 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:49:51.679549 master-0 kubenswrapper[19715]: I0313 12:49:51.679519 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-lwxxn" Mar 13 12:49:51.679735 master-0 kubenswrapper[19715]: I0313 12:49:51.679706 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:51.679787 master-0 kubenswrapper[19715]: I0313 12:49:51.679779 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj" Mar 13 12:49:51.679901 master-0 kubenswrapper[19715]: I0313 12:49:51.679875 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:49:51.680103 master-0 kubenswrapper[19715]: I0313 12:49:51.680051 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-92rsn" Mar 13 12:49:51.683715 master-0 kubenswrapper[19715]: I0313 12:49:51.683662 19715 scope.go:117] "RemoveContainer" containerID="63e03be6775769ad765af20dfd2ac68f1e500a160a4e77eda15bd7fdcfe1bc2a" Mar 13 12:49:51.683863 master-0 kubenswrapper[19715]: I0313 12:49:51.683780 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-tml9z" Mar 13 12:49:51.685779 master-0 kubenswrapper[19715]: I0313 12:49:51.684959 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9sfh\" (UniqueName: \"kubernetes.io/projected/3f66dbf5-722f-4aed-becb-fb1b62ea7fe6-kube-api-access-r9sfh\") pod \"openshift-controller-manager-operator-8565d84698-b52x8\" (UID: \"3f66dbf5-722f-4aed-becb-fb1b62ea7fe6\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-b52x8" Mar 13 12:49:51.691910 master-0 kubenswrapper[19715]: I0313 12:49:51.691704 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:49:51.694432 master-0 kubenswrapper[19715]: I0313 12:49:51.694403 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:49:51.700959 master-0 kubenswrapper[19715]: I0313 12:49:51.699779 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkjph\" (UniqueName: \"kubernetes.io/projected/f2a74c2a-8376-4998-bdc6-02a978f1f568-kube-api-access-bkjph\") pod \"authentication-operator-7c6989d6c4-ztmrr\" (UID: \"f2a74c2a-8376-4998-bdc6-02a978f1f568\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-ztmrr" Mar 13 12:49:51.703459 master-0 kubenswrapper[19715]: I0313 12:49:51.703409 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f77c8e18b751d90bc0dfe2d4e304050" path="/var/lib/kubelet/pods/5f77c8e18b751d90bc0dfe2d4e304050/volumes" Mar 13 12:49:51.704399 master-0 kubenswrapper[19715]: I0313 12:49:51.704082 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-rcfgn" Mar 13 12:49:51.704399 master-0 kubenswrapper[19715]: I0313 12:49:51.704159 19715 scope.go:117] "RemoveContainer" containerID="9cc438a36a13c0e2e1f239bcab312b0eda7119d2153cef22f48639612d94c13e" Mar 13 12:49:51.719317 master-0 kubenswrapper[19715]: I0313 12:49:51.719258 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0d868028-9984-472a-8403-ffed767e1bf8-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-nwclt\" (UID: \"0d868028-9984-472a-8403-ffed767e1bf8\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-nwclt" Mar 13 12:49:51.727893 master-0 kubenswrapper[19715]: I0313 12:49:51.724497 19715 scope.go:117] "RemoveContainer" containerID="ce22fd707eb8d075fa41f40a0f4c10a702d0584171d207a5ade9ca190ac33eb6" Mar 13 12:49:51.730563 master-0 kubenswrapper[19715]: I0313 12:49:51.730422 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6w8hd" Mar 13 12:49:51.744564 master-0 kubenswrapper[19715]: I0313 12:49:51.742601 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2894g\" (UniqueName: \"kubernetes.io/projected/b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2-kube-api-access-2894g\") pod \"cluster-storage-operator-6fbfc8dc8f-hr4ws\" (UID: \"b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-hr4ws" Mar 13 12:49:51.765277 master-0 kubenswrapper[19715]: I0313 12:49:51.765112 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bk8kt\" (UniqueName: \"kubernetes.io/projected/70c8b79e-4d29-4ae2-a24f-68595d942442-kube-api-access-bk8kt\") pod \"network-check-target-jjmb8\" (UID: \"70c8b79e-4d29-4ae2-a24f-68595d942442\") " 
pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:49:51.778388 master-0 kubenswrapper[19715]: I0313 12:49:51.778312 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jn59j\" (UniqueName: \"kubernetes.io/projected/6d1a0616-4479-4621-b042-36a586bd8248-kube-api-access-jn59j\") pod \"multus-additional-cni-plugins-wl6w4\" (UID: \"6d1a0616-4479-4621-b042-36a586bd8248\") " pod="openshift-multus/multus-additional-cni-plugins-wl6w4" Mar 13 12:49:51.806496 master-0 kubenswrapper[19715]: I0313 12:49:51.806431 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kube-api-access\") pod \"installer-5-master-0\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:49:51.842852 master-0 kubenswrapper[19715]: I0313 12:49:51.842755 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cv745\" (UniqueName: \"kubernetes.io/projected/74fa8c05-2d64-4307-9fe3-1d3d69a5aa75-kube-api-access-cv745\") pod \"control-plane-machine-set-operator-6686554ddc-d7qrz\" (UID: \"74fa8c05-2d64-4307-9fe3-1d3d69a5aa75\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-d7qrz" Mar 13 12:49:51.858782 master-0 kubenswrapper[19715]: I0313 12:49:51.858729 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvn5d\" (UniqueName: \"kubernetes.io/projected/14eb83e7-c436-4f10-8cba-29e09a7036a8-kube-api-access-kvn5d\") pod \"machine-config-operator-fdb5c78b5-692fv\" (UID: \"14eb83e7-c436-4f10-8cba-29e09a7036a8\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-692fv" Mar 13 12:49:51.878274 master-0 kubenswrapper[19715]: I0313 12:49:51.878221 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b2lvh\" (UniqueName: 
\"kubernetes.io/projected/1ad68c2d-762a-47ed-bd56-e823a83b9087-kube-api-access-b2lvh\") pod \"ovnkube-node-vlrf6\" (UID: \"1ad68c2d-762a-47ed-bd56-e823a83b9087\") " pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.898172 master-0 kubenswrapper[19715]: I0313 12:49:51.898103 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4vqww\" (UniqueName: \"kubernetes.io/projected/1e9803a4-a166-42dc-9498-57e213602684-kube-api-access-4vqww\") pod \"service-ca-84bfdbbb7f-cgw5c\" (UID: \"1e9803a4-a166-42dc-9498-57e213602684\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-cgw5c" Mar 13 12:49:51.923316 master-0 kubenswrapper[19715]: I0313 12:49:51.923265 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bdvgq\" (UniqueName: \"kubernetes.io/projected/16c2d774-967f-4964-ab4e-eb13c4364f63-kube-api-access-bdvgq\") pod \"cluster-image-registry-operator-86d6d77c7c-cjq8f\" (UID: \"16c2d774-967f-4964-ab4e-eb13c4364f63\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-cjq8f" Mar 13 12:49:51.923597 master-0 kubenswrapper[19715]: I0313 12:49:51.923419 19715 request.go:700] Waited for 1.0149507s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token Mar 13 12:49:51.946488 master-0 kubenswrapper[19715]: I0313 12:49:51.946429 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkcxc\" (UniqueName: \"kubernetes.io/projected/8226ffac-1f76-4eaa-ada5-056b5fd031b4-kube-api-access-gkcxc\") pod \"catalog-operator-7d9c49f57b-zxzfr\" (UID: \"8226ffac-1f76-4eaa-ada5-056b5fd031b4\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:49:51.963081 master-0 kubenswrapper[19715]: I0313 12:49:51.963017 19715 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-27pbr\" (UniqueName: \"kubernetes.io/projected/2a5976df-0366-47b3-bc54-1ba7c249e87c-kube-api-access-27pbr\") pod \"olm-operator-d64cfc9db-d8z4h\" (UID: \"2a5976df-0366-47b3-bc54-1ba7c249e87c\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:49:51.964355 master-0 kubenswrapper[19715]: I0313 12:49:51.964317 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:49:51.970339 master-0 kubenswrapper[19715]: I0313 12:49:51.969551 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:49:51.975075 master-0 kubenswrapper[19715]: I0313 12:49:51.974786 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-zxzfr" Mar 13 12:49:51.977384 master-0 kubenswrapper[19715]: I0313 12:49:51.976038 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-d8z4h" Mar 13 12:49:51.977384 master-0 kubenswrapper[19715]: I0313 12:49:51.976327 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:49:51.978461 master-0 kubenswrapper[19715]: I0313 12:49:51.978413 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-jjmb8" Mar 13 12:49:51.983929 master-0 kubenswrapper[19715]: I0313 12:49:51.983774 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vg7m\" (UniqueName: \"kubernetes.io/projected/7343df96-cba2-477b-8a1b-7af369620440-kube-api-access-6vg7m\") pod \"controller-manager-5ff9c7cb47-f4k6t\" (UID: 
\"7343df96-cba2-477b-8a1b-7af369620440\") " pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:49:51.993568 master-0 kubenswrapper[19715]: I0313 12:49:51.992730 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:51.993568 master-0 kubenswrapper[19715]: I0313 12:49:51.992790 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:52.000045 master-0 kubenswrapper[19715]: I0313 12:49:51.999953 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7s2cb\" (UniqueName: \"kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb\") pod \"multus-admission-controller-8d675b596-pbgd4\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:49:52.017787 master-0 kubenswrapper[19715]: I0313 12:49:52.017726 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vgd4v\" (UniqueName: \"kubernetes.io/projected/0943b2db-9658-4a8d-89da-00779d55db6e-kube-api-access-vgd4v\") pod \"apiserver-6f6d949ddd-p9f8k\" (UID: \"0943b2db-9658-4a8d-89da-00779d55db6e\") " pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k" Mar 13 12:49:52.021673 master-0 kubenswrapper[19715]: I0313 12:49:52.021651 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:52.023145 master-0 kubenswrapper[19715]: I0313 12:49:52.023121 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6" Mar 13 12:49:52.038099 master-0 kubenswrapper[19715]: I0313 12:49:52.038050 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvprm\" (UniqueName: 
\"kubernetes.io/projected/20217cff-2f81-4a56-9c15-28385c19258c-kube-api-access-nvprm\") pod \"package-server-manager-854648ff6d-w8b7h\" (UID: \"20217cff-2f81-4a56-9c15-28385c19258c\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h" Mar 13 12:49:52.058256 master-0 kubenswrapper[19715]: I0313 12:49:52.058178 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2jkn\" (UniqueName: \"kubernetes.io/projected/6e55908e-59f3-45a2-82aa-2616c5a2fd52-kube-api-access-x2jkn\") pod \"etcd-operator-5884b9cd56-v5bfn\" (UID: \"6e55908e-59f3-45a2-82aa-2616c5a2fd52\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-v5bfn" Mar 13 12:49:52.080832 master-0 kubenswrapper[19715]: I0313 12:49:52.080760 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c1213b50-28bf-43ff-94c4-20616907735b-bound-sa-token\") pod \"ingress-operator-677db989d6-9nxcz\" (UID: \"c1213b50-28bf-43ff-94c4-20616907735b\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-9nxcz" Mar 13 12:49:52.100236 master-0 kubenswrapper[19715]: I0313 12:49:52.100177 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdcsm\" (UniqueName: \"kubernetes.io/projected/6e4e773c-d970-4f5e-9172-c1ebdb41888d-kube-api-access-tdcsm\") pod \"marketplace-operator-64bf9778cb-7wnld\" (UID: \"6e4e773c-d970-4f5e-9172-c1ebdb41888d\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:49:52.120352 master-0 kubenswrapper[19715]: I0313 12:49:52.120138 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqhcp\" (UniqueName: \"kubernetes.io/projected/e0763043-3813-43b6-9618-b2d15c942edb-kube-api-access-mqhcp\") pod \"cluster-baremetal-operator-5cdb4c5598-hp84r\" (UID: \"e0763043-3813-43b6-9618-b2d15c942edb\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-hp84r" Mar 13 12:49:52.155826 master-0 kubenswrapper[19715]: I0313 12:49:52.155765 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hflng\" (UniqueName: \"kubernetes.io/projected/f726d662-90e1-45b9-9bba-76a9c03faced-kube-api-access-hflng\") pod \"node-resolver-5jth9\" (UID: \"f726d662-90e1-45b9-9bba-76a9c03faced\") " pod="openshift-dns/node-resolver-5jth9" Mar 13 12:49:52.174971 master-0 kubenswrapper[19715]: I0313 12:49:52.174909 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pr995\" (UniqueName: \"kubernetes.io/projected/cf9f90f5-643f-41e8-a886-7d19fb064afc-kube-api-access-pr995\") pod \"certified-operators-6vng8\" (UID: \"cf9f90f5-643f-41e8-a886-7d19fb064afc\") " pod="openshift-marketplace/certified-operators-6vng8" Mar 13 12:49:52.184090 master-0 kubenswrapper[19715]: I0313 12:49:52.184034 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm25n\" (UniqueName: \"kubernetes.io/projected/f85ab8ab-f9f1-47ad-9c96-9498cef92474-kube-api-access-sm25n\") pod \"dns-operator-589895fbb7-w7mv2\" (UID: \"f85ab8ab-f9f1-47ad-9c96-9498cef92474\") " pod="openshift-dns-operator/dns-operator-589895fbb7-w7mv2" Mar 13 12:49:52.197171 master-0 kubenswrapper[19715]: E0313 12:49:52.197103 19715 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:49:52.197654 master-0 kubenswrapper[19715]: I0313 12:49:52.197621 19715 scope.go:117] "RemoveContainer" containerID="3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d" Mar 13 12:49:52.225798 master-0 kubenswrapper[19715]: I0313 12:49:52.225734 19715 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 12:49:52.266497 master-0 kubenswrapper[19715]: I0313 
12:49:52.266369 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:49:52.267875 master-0 kubenswrapper[19715]: I0313 12:49:52.267814 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:49:52.404977 master-0 kubenswrapper[19715]: I0313 12:49:52.404910 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:49:52.405302 master-0 kubenswrapper[19715]: I0313 12:49:52.405278 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:49:52.405540 master-0 kubenswrapper[19715]: I0313 12:49:52.405515 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:52.405653 master-0 kubenswrapper[19715]: I0313 12:49:52.405635 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:52.406270 master-0 kubenswrapper[19715]: I0313 12:49:52.406241 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6vng8"
Mar 13 12:49:52.406342 master-0 kubenswrapper[19715]: I0313 12:49:52.406290 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6vng8"
Mar 13 12:49:52.411417 master-0 kubenswrapper[19715]: I0313 12:49:52.410453 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-w8b7h"
Mar 13 12:49:52.412120 master-0 kubenswrapper[19715]: I0313 12:49:52.412079 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:49:52.417937 master-0 kubenswrapper[19715]: I0313 12:49:52.417882 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"
Mar 13 12:49:52.462868 master-0 kubenswrapper[19715]: I0313 12:49:52.462798 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6vng8"
Mar 13 12:49:52.691980 master-0 kubenswrapper[19715]: I0313 12:49:52.691920 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_cdcecc61ff5eeb08bd2a3ac12599e4f9/kube-apiserver-check-endpoints/0.log"
Mar 13 12:49:52.693955 master-0 kubenswrapper[19715]: I0313 12:49:52.693920 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff"}
Mar 13 12:49:52.694534 master-0 kubenswrapper[19715]: I0313 12:49:52.694519 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:49:53.506861 master-0 kubenswrapper[19715]: I0313 12:49:53.506809 19715 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 13 12:49:53.702802 master-0 kubenswrapper[19715]: I0313 12:49:53.702745 19715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:49:54.112972 master-0 kubenswrapper[19715]: I0313 12:49:54.112881 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 13 12:49:54.118256 master-0 kubenswrapper[19715]: I0313 12:49:54.118219 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"]
Mar 13 12:49:54.556771 master-0 kubenswrapper[19715]: I0313 12:49:54.556682 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=4.556644275 podStartE2EDuration="4.556644275s" podCreationTimestamp="2026-03-13 12:49:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:49:54.55648326 +0000 UTC m=+21.123156027" watchObservedRunningTime="2026-03-13 12:49:54.556644275 +0000 UTC m=+21.123317032"
Mar 13 12:49:55.718078 master-0 kubenswrapper[19715]: I0313 12:49:55.717180 19715 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 12:49:55.718078 master-0 kubenswrapper[19715]: I0313 12:49:55.717249 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 12:49:55.726731 master-0 kubenswrapper[19715]: I0313 12:49:55.726681 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80ceb0f9-67e4-4275-8532-85b6602367a2" path="/var/lib/kubelet/pods/80ceb0f9-67e4-4275-8532-85b6602367a2/volumes"
Mar 13 12:49:56.688091 master-0 kubenswrapper[19715]: I0313 12:49:56.688031 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:56.693218 master-0 kubenswrapper[19715]: I0313 12:49:56.693160 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-8459d5b549-n9fzj"
Mar 13 12:49:57.413067 master-0 kubenswrapper[19715]: I0313 12:49:57.412974 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-6f6d949ddd-p9f8k"
Mar 13 12:50:01.129850 master-0 kubenswrapper[19715]: I0313 12:50:01.129778 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-28fdg"
Mar 13 12:50:01.716904 master-0 kubenswrapper[19715]: I0313 12:50:01.716842 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6w8hd"
Mar 13 12:50:01.722690 master-0 kubenswrapper[19715]: I0313 12:50:01.722644 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-92rsn"
Mar 13 12:50:01.762888 master-0 kubenswrapper[19715]: I0313 12:50:01.762838 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-92rsn"
Mar 13 12:50:02.442810 master-0 kubenswrapper[19715]: I0313 12:50:02.442730 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6vng8"
Mar 13 12:50:05.696292 master-0 kubenswrapper[19715]: I0313 12:50:05.696232 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72"
Mar 13 12:50:06.809042 master-0 kubenswrapper[19715]: I0313 12:50:06.808983 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"4f7ff4562a79b8bd2c0cbb72f384270ed3c70b557b5276791fba9d8debdb7623"}
Mar 13 12:50:07.103287 master-0 kubenswrapper[19715]: I0313 12:50:07.103140 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:50:09.063828 master-0 kubenswrapper[19715]: I0313 12:50:09.063738 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:50:10.379766 master-0 kubenswrapper[19715]: I0313 12:50:10.379661 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:50:10.384622 master-0 kubenswrapper[19715]: I0313 12:50:10.384592 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:50:13.130923 master-0 kubenswrapper[19715]: I0313 12:50:13.130861 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:50:13.131635 master-0 kubenswrapper[19715]: I0313 12:50:13.131122 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="f417e14665db2ffffa887ce21c9ff0ed" containerName="startup-monitor" containerID="cri-o://0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d" gracePeriod=5
Mar 13 12:50:16.346791 master-0 kubenswrapper[19715]: I0313 12:50:16.346711 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:50:16.347704 master-0 kubenswrapper[19715]: I0313 12:50:16.346993 19715 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 13 12:50:16.366996 master-0 kubenswrapper[19715]: I0313 12:50:16.366928 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-vlrf6"
Mar 13 12:50:17.107779 master-0 kubenswrapper[19715]: I0313 12:50:17.107630 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 12:50:18.697335 master-0 kubenswrapper[19715]: I0313 12:50:18.697239 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_f417e14665db2ffffa887ce21c9ff0ed/startup-monitor/0.log"
Mar 13 12:50:18.697910 master-0 kubenswrapper[19715]: I0313 12:50:18.697461 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:50:18.731742 master-0 kubenswrapper[19715]: I0313 12:50:18.731625 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"f417e14665db2ffffa887ce21c9ff0ed\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") "
Mar 13 12:50:18.731742 master-0 kubenswrapper[19715]: I0313 12:50:18.731723 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"f417e14665db2ffffa887ce21c9ff0ed\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") "
Mar 13 12:50:18.731742 master-0 kubenswrapper[19715]: I0313 12:50:18.731756 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"f417e14665db2ffffa887ce21c9ff0ed\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") "
Mar 13 12:50:18.732250 master-0 kubenswrapper[19715]: I0313 12:50:18.731782 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"f417e14665db2ffffa887ce21c9ff0ed\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") "
Mar 13 12:50:18.732250 master-0 kubenswrapper[19715]: I0313 12:50:18.731791 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f417e14665db2ffffa887ce21c9ff0ed" (UID: "f417e14665db2ffffa887ce21c9ff0ed"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:50:18.732250 master-0 kubenswrapper[19715]: I0313 12:50:18.731852 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock" (OuterVolumeSpecName: "var-lock") pod "f417e14665db2ffffa887ce21c9ff0ed" (UID: "f417e14665db2ffffa887ce21c9ff0ed"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:50:18.732250 master-0 kubenswrapper[19715]: I0313 12:50:18.731877 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log" (OuterVolumeSpecName: "var-log") pod "f417e14665db2ffffa887ce21c9ff0ed" (UID: "f417e14665db2ffffa887ce21c9ff0ed"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:50:18.732250 master-0 kubenswrapper[19715]: I0313 12:50:18.731903 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"f417e14665db2ffffa887ce21c9ff0ed\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") "
Mar 13 12:50:18.732880 master-0 kubenswrapper[19715]: I0313 12:50:18.732425 19715 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:50:18.732880 master-0 kubenswrapper[19715]: I0313 12:50:18.732429 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests" (OuterVolumeSpecName: "manifests") pod "f417e14665db2ffffa887ce21c9ff0ed" (UID: "f417e14665db2ffffa887ce21c9ff0ed"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:50:18.732880 master-0 kubenswrapper[19715]: I0313 12:50:18.732447 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:50:18.732880 master-0 kubenswrapper[19715]: I0313 12:50:18.732491 19715 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") on node \"master-0\" DevicePath \"\""
Mar 13 12:50:18.739460 master-0 kubenswrapper[19715]: I0313 12:50:18.739373 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f417e14665db2ffffa887ce21c9ff0ed" (UID: "f417e14665db2ffffa887ce21c9ff0ed"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:50:18.833506 master-0 kubenswrapper[19715]: I0313 12:50:18.833408 19715 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") on node \"master-0\" DevicePath \"\""
Mar 13 12:50:18.833506 master-0 kubenswrapper[19715]: I0313 12:50:18.833477 19715 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:50:18.876902 master-0 kubenswrapper[19715]: I0313 12:50:18.876855 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_f417e14665db2ffffa887ce21c9ff0ed/startup-monitor/0.log"
Mar 13 12:50:18.877181 master-0 kubenswrapper[19715]: I0313 12:50:18.876912 19715 generic.go:334] "Generic (PLEG): container finished" podID="f417e14665db2ffffa887ce21c9ff0ed" containerID="0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d" exitCode=137
Mar 13 12:50:18.877181 master-0 kubenswrapper[19715]: I0313 12:50:18.876967 19715 scope.go:117] "RemoveContainer" containerID="0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d"
Mar 13 12:50:18.877181 master-0 kubenswrapper[19715]: I0313 12:50:18.877032 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:50:18.897093 master-0 kubenswrapper[19715]: I0313 12:50:18.897040 19715 scope.go:117] "RemoveContainer" containerID="0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d"
Mar 13 12:50:18.897912 master-0 kubenswrapper[19715]: E0313 12:50:18.897858 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d\": container with ID starting with 0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d not found: ID does not exist" containerID="0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d"
Mar 13 12:50:18.898062 master-0 kubenswrapper[19715]: I0313 12:50:18.897938 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d"} err="failed to get container status \"0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d\": rpc error: code = NotFound desc = could not find container \"0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d\": container with ID starting with 0f720fd10430515f2f9c6cdddb2f7cdda1e9db644f746d8096b032eb1b882b5d not found: ID does not exist"
Mar 13 12:50:19.702538 master-0 kubenswrapper[19715]: I0313 12:50:19.702490 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f417e14665db2ffffa887ce21c9ff0ed" path="/var/lib/kubelet/pods/f417e14665db2ffffa887ce21c9ff0ed/volumes"
Mar 13 12:50:23.885765 master-0 kubenswrapper[19715]: I0313 12:50:23.885670 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"]
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886049 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7028b88a-ef6e-47f7-bbd7-cf798efdded5" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886067 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="7028b88a-ef6e-47f7-bbd7-cf798efdded5" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886096 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc244427-5e4e-441c-a04d-f93aeca9b7c1" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886106 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc244427-5e4e-441c-a04d-f93aeca9b7c1" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886117 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80ceb0f9-67e4-4275-8532-85b6602367a2" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886128 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="80ceb0f9-67e4-4275-8532-85b6602367a2" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886145 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886152 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886172 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886180 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886190 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2ae954b-a362-4cd1-8e54-c4aedcf30a00" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886197 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2ae954b-a362-4cd1-8e54-c4aedcf30a00" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886208 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0feecf04-574d-4bf6-968d-77dd5c35260b" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886217 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="0feecf04-574d-4bf6-968d-77dd5c35260b" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886228 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerName="assisted-installer-controller"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886236 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerName="assisted-installer-controller"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886249 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f417e14665db2ffffa887ce21c9ff0ed" containerName="startup-monitor"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886257 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="f417e14665db2ffffa887ce21c9ff0ed" containerName="startup-monitor"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886270 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aae10aa9-9c7d-4319-9829-e900af7df301" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886279 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="aae10aa9-9c7d-4319-9829-e900af7df301" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: E0313 12:50:23.886288 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886295 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886424 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="2352a350-0a7c-4fcd-ba8f-ee9a4c80b132" containerName="assisted-installer-controller"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886446 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="0feecf04-574d-4bf6-968d-77dd5c35260b" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886472 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2ae954b-a362-4cd1-8e54-c4aedcf30a00" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886484 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="f417e14665db2ffffa887ce21c9ff0ed" containerName="startup-monitor"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886494 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc244427-5e4e-441c-a04d-f93aeca9b7c1" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886507 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886521 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="aae10aa9-9c7d-4319-9829-e900af7df301" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886531 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886539 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="80ceb0f9-67e4-4275-8532-85b6602367a2" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886549 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="7028b88a-ef6e-47f7-bbd7-cf798efdded5" containerName="installer"
Mar 13 12:50:23.886599 master-0 kubenswrapper[19715]: I0313 12:50:23.886559 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup"
Mar 13 12:50:23.888314 master-0 kubenswrapper[19715]: I0313 12:50:23.887227 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"]
Mar 13 12:50:23.888715 master-0 kubenswrapper[19715]: I0313 12:50:23.888649 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:23.889307 master-0 kubenswrapper[19715]: I0313 12:50:23.889270 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:23.897554 master-0 kubenswrapper[19715]: I0313 12:50:23.897454 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 13 12:50:23.897852 master-0 kubenswrapper[19715]: I0313 12:50:23.897774 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 13 12:50:23.897852 master-0 kubenswrapper[19715]: I0313 12:50:23.897815 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images"
Mar 13 12:50:23.898031 master-0 kubenswrapper[19715]: I0313 12:50:23.898006 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 13 12:50:23.898079 master-0 kubenswrapper[19715]: I0313 12:50:23.898040 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-xv4qd"
Mar 13 12:50:23.898236 master-0 kubenswrapper[19715]: I0313 12:50:23.898202 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 12:50:23.898313 master-0 kubenswrapper[19715]: I0313 12:50:23.898280 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 13 12:50:23.898313 master-0 kubenswrapper[19715]: I0313 12:50:23.898299 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 13 12:50:23.898439 master-0 kubenswrapper[19715]: I0313 12:50:23.898216 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 13 12:50:23.899270 master-0 kubenswrapper[19715]: I0313 12:50:23.899235 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dh2b7"
Mar 13 12:50:23.899270 master-0 kubenswrapper[19715]: I0313 12:50:23.899254 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:50:23.899532 master-0 kubenswrapper[19715]: I0313 12:50:23.899503 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 13 12:50:24.058436 master-0 kubenswrapper[19715]: I0313 12:50:24.058370 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.058436 master-0 kubenswrapper[19715]: I0313 12:50:24.058445 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.058831 master-0 kubenswrapper[19715]: I0313 12:50:24.058474 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.058831 master-0 kubenswrapper[19715]: I0313 12:50:24.058544 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba96d419-dfd0-49ff-baf3-041262e8867e-config\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.058831 master-0 kubenswrapper[19715]: I0313 12:50:24.058629 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.058831 master-0 kubenswrapper[19715]: I0313 12:50:24.058658 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ba96d419-dfd0-49ff-baf3-041262e8867e-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.058831 master-0 kubenswrapper[19715]: I0313 12:50:24.058702 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdnj8\" (UniqueName: \"kubernetes.io/projected/ba96d419-dfd0-49ff-baf3-041262e8867e-kube-api-access-gdnj8\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.058831 master-0 kubenswrapper[19715]: I0313 12:50:24.058753 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba96d419-dfd0-49ff-baf3-041262e8867e-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.058831 master-0 kubenswrapper[19715]: I0313 12:50:24.058781 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k468m\" (UniqueName: \"kubernetes.io/projected/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-kube-api-access-k468m\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.159719 master-0 kubenswrapper[19715]: I0313 12:50:24.159539 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k468m\" (UniqueName: \"kubernetes.io/projected/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-kube-api-access-k468m\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.159968 master-0 kubenswrapper[19715]: I0313 12:50:24.159777 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.160135 master-0 kubenswrapper[19715]: I0313 12:50:24.160082 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.160688 master-0 kubenswrapper[19715]: I0313 12:50:24.160654 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.160791 master-0 kubenswrapper[19715]: I0313 12:50:24.159814 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.160791 master-0 kubenswrapper[19715]: I0313 12:50:24.160750 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.160791 master-0 kubenswrapper[19715]: I0313 12:50:24.160773 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba96d419-dfd0-49ff-baf3-041262e8867e-config\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.161395 master-0 kubenswrapper[19715]: I0313 12:50:24.161366 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.161492 master-0 kubenswrapper[19715]: I0313 12:50:24.161431 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.161492 master-0 kubenswrapper[19715]: I0313 12:50:24.161454 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ba96d419-dfd0-49ff-baf3-041262e8867e-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.161846 master-0 kubenswrapper[19715]: I0313 12:50:24.161819 19715 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 13 12:50:24.162123 master-0 kubenswrapper[19715]: I0313 12:50:24.162102 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdnj8\" (UniqueName: \"kubernetes.io/projected/ba96d419-dfd0-49ff-baf3-041262e8867e-kube-api-access-gdnj8\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.162192 master-0 kubenswrapper[19715]: I0313 12:50:24.162141 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba96d419-dfd0-49ff-baf3-041262e8867e-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.162570 master-0 kubenswrapper[19715]: I0313 12:50:24.162545 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba96d419-dfd0-49ff-baf3-041262e8867e-config\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.164380 master-0 kubenswrapper[19715]: I0313 12:50:24.163193 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/ba96d419-dfd0-49ff-baf3-041262e8867e-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.166008 master-0 kubenswrapper[19715]: I0313 12:50:24.165967 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/ba96d419-dfd0-49ff-baf3-041262e8867e-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7"
Mar 13 12:50:24.166128 master-0 kubenswrapper[19715]: I0313 12:50:24.166100 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.180097 master-0 kubenswrapper[19715]: I0313 12:50:24.180043 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k468m\" (UniqueName: \"kubernetes.io/projected/5c1c87ba-53c4-4b52-88e2-a3ed2d801393-kube-api-access-k468m\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w\" (UID: \"5c1c87ba-53c4-4b52-88e2-a3ed2d801393\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w"
Mar 13 12:50:24.183132 master-0 kubenswrapper[19715]: I0313 12:50:24.183089 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdnj8\" (UniqueName:
\"kubernetes.io/projected/ba96d419-dfd0-49ff-baf3-041262e8867e-kube-api-access-gdnj8\") pod \"machine-approver-754bdc9f9d-jvwf7\" (UID: \"ba96d419-dfd0-49ff-baf3-041262e8867e\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7" Mar 13 12:50:24.222944 master-0 kubenswrapper[19715]: I0313 12:50:24.222885 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w" Mar 13 12:50:24.251467 master-0 kubenswrapper[19715]: I0313 12:50:24.251405 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7" Mar 13 12:50:24.251674 master-0 kubenswrapper[19715]: W0313 12:50:24.251273 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c1c87ba_53c4_4b52_88e2_a3ed2d801393.slice/crio-2c773e1960a879181064d8d213c3a35b3905544e7fe844ace1d8a54155c881e4 WatchSource:0}: Error finding container 2c773e1960a879181064d8d213c3a35b3905544e7fe844ace1d8a54155c881e4: Status 404 returned error can't find the container with id 2c773e1960a879181064d8d213c3a35b3905544e7fe844ace1d8a54155c881e4 Mar 13 12:50:24.924161 master-0 kubenswrapper[19715]: I0313 12:50:24.924096 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w" event={"ID":"5c1c87ba-53c4-4b52-88e2-a3ed2d801393","Type":"ContainerStarted","Data":"39280bfecabdbb9473ef6ced1d0474faf3f22a2f3359c1cb78b316f988bc00a7"} Mar 13 12:50:24.924161 master-0 kubenswrapper[19715]: I0313 12:50:24.924151 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w" 
event={"ID":"5c1c87ba-53c4-4b52-88e2-a3ed2d801393","Type":"ContainerStarted","Data":"ef36af4e69ace4e8985ef10144e2090b1ba9d0a8ba16233994ec5f5c363d716d"} Mar 13 12:50:24.924161 master-0 kubenswrapper[19715]: I0313 12:50:24.924166 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w" event={"ID":"5c1c87ba-53c4-4b52-88e2-a3ed2d801393","Type":"ContainerStarted","Data":"2c773e1960a879181064d8d213c3a35b3905544e7fe844ace1d8a54155c881e4"} Mar 13 12:50:24.927320 master-0 kubenswrapper[19715]: I0313 12:50:24.927279 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7" event={"ID":"ba96d419-dfd0-49ff-baf3-041262e8867e","Type":"ContainerStarted","Data":"4f918753a295803117a33de9671ffe62931f62963e5a21bc890395cd04e4847e"} Mar 13 12:50:24.927456 master-0 kubenswrapper[19715]: I0313 12:50:24.927330 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7" event={"ID":"ba96d419-dfd0-49ff-baf3-041262e8867e","Type":"ContainerStarted","Data":"66f3cc8bccbc42b53f4e379bc0814e8d700d4da9206b7562d7bfa84a4ab078b4"} Mar 13 12:50:24.927456 master-0 kubenswrapper[19715]: I0313 12:50:24.927350 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7" event={"ID":"ba96d419-dfd0-49ff-baf3-041262e8867e","Type":"ContainerStarted","Data":"785ec83babb46378da46c27cee1caddb6d854b7c8028394d2d0422ff7a89649c"} Mar 13 12:50:25.717470 master-0 kubenswrapper[19715]: I0313 12:50:25.717399 19715 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 
12:50:25.717740 master-0 kubenswrapper[19715]: I0313 12:50:25.717478 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:50:25.938242 master-0 kubenswrapper[19715]: I0313 12:50:25.938137 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w" event={"ID":"5c1c87ba-53c4-4b52-88e2-a3ed2d801393","Type":"ContainerStarted","Data":"ff58e1ccf74ee3c50d25948cb78fbfb553557f10fd050512b6bc20b8f14225fe"} Mar 13 12:50:25.954419 master-0 kubenswrapper[19715]: I0313 12:50:25.954344 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w" podStartSLOduration=2.95432441 podStartE2EDuration="2.95432441s" podCreationTimestamp="2026-03-13 12:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:50:25.952803951 +0000 UTC m=+52.519476718" watchObservedRunningTime="2026-03-13 12:50:25.95432441 +0000 UTC m=+52.520997167" Mar 13 12:50:25.954961 master-0 kubenswrapper[19715]: I0313 12:50:25.954932 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-jvwf7" podStartSLOduration=2.954923998 podStartE2EDuration="2.954923998s" podCreationTimestamp="2026-03-13 12:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:50:24.941827004 +0000 UTC m=+51.508499761" watchObservedRunningTime="2026-03-13 
12:50:25.954923998 +0000 UTC m=+52.521596765" Mar 13 12:50:28.832150 master-0 kubenswrapper[19715]: I0313 12:50:28.831869 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-fntms"] Mar 13 12:50:28.834415 master-0 kubenswrapper[19715]: I0313 12:50:28.833039 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-fntms" Mar 13 12:50:28.835397 master-0 kubenswrapper[19715]: I0313 12:50:28.835345 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-ct6jh" Mar 13 12:50:28.849657 master-0 kubenswrapper[19715]: I0313 12:50:28.849563 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-fntms"] Mar 13 12:50:29.028401 master-0 kubenswrapper[19715]: I0313 12:50:29.027951 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7-webhook-certs\") pod \"multus-admission-controller-7769569c45-fntms\" (UID: \"d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-fntms" Mar 13 12:50:29.028401 master-0 kubenswrapper[19715]: I0313 12:50:29.028099 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lw5vx\" (UniqueName: \"kubernetes.io/projected/d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7-kube-api-access-lw5vx\") pod \"multus-admission-controller-7769569c45-fntms\" (UID: \"d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-fntms" Mar 13 12:50:29.137712 master-0 kubenswrapper[19715]: I0313 12:50:29.132288 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lw5vx\" (UniqueName: 
\"kubernetes.io/projected/d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7-kube-api-access-lw5vx\") pod \"multus-admission-controller-7769569c45-fntms\" (UID: \"d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-fntms" Mar 13 12:50:29.137712 master-0 kubenswrapper[19715]: I0313 12:50:29.132423 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7-webhook-certs\") pod \"multus-admission-controller-7769569c45-fntms\" (UID: \"d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-fntms" Mar 13 12:50:29.154462 master-0 kubenswrapper[19715]: I0313 12:50:29.154334 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7-webhook-certs\") pod \"multus-admission-controller-7769569c45-fntms\" (UID: \"d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-fntms" Mar 13 12:50:29.156599 master-0 kubenswrapper[19715]: I0313 12:50:29.155715 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lw5vx\" (UniqueName: \"kubernetes.io/projected/d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7-kube-api-access-lw5vx\") pod \"multus-admission-controller-7769569c45-fntms\" (UID: \"d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7\") " pod="openshift-multus/multus-admission-controller-7769569c45-fntms" Mar 13 12:50:29.341742 master-0 kubenswrapper[19715]: I0313 12:50:29.341685 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-7769569c45-fntms" Mar 13 12:50:29.376860 master-0 kubenswrapper[19715]: I0313 12:50:29.376789 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 12:50:29.377764 master-0 kubenswrapper[19715]: I0313 12:50:29.377563 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.381144 master-0 kubenswrapper[19715]: I0313 12:50:29.380919 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-jdg75" Mar 13 12:50:29.382395 master-0 kubenswrapper[19715]: I0313 12:50:29.382344 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 12:50:29.389815 master-0 kubenswrapper[19715]: I0313 12:50:29.389265 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 12:50:29.540079 master-0 kubenswrapper[19715]: I0313 12:50:29.539982 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.540408 master-0 kubenswrapper[19715]: I0313 12:50:29.540108 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-var-lock\") pod \"installer-3-master-0\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.540408 master-0 
kubenswrapper[19715]: I0313 12:50:29.540145 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kube-api-access\") pod \"installer-3-master-0\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.644123 master-0 kubenswrapper[19715]: I0313 12:50:29.643897 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-var-lock\") pod \"installer-3-master-0\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.644123 master-0 kubenswrapper[19715]: I0313 12:50:29.643974 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kube-api-access\") pod \"installer-3-master-0\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.644429 master-0 kubenswrapper[19715]: I0313 12:50:29.644129 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-var-lock\") pod \"installer-3-master-0\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.644429 master-0 kubenswrapper[19715]: I0313 12:50:29.644233 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " 
pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.644429 master-0 kubenswrapper[19715]: I0313 12:50:29.644385 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.661466 master-0 kubenswrapper[19715]: I0313 12:50:29.660590 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kube-api-access\") pod \"installer-3-master-0\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.737016 master-0 kubenswrapper[19715]: I0313 12:50:29.736914 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:50:29.830059 master-0 kubenswrapper[19715]: I0313 12:50:29.829992 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-7769569c45-fntms"] Mar 13 12:50:29.846950 master-0 kubenswrapper[19715]: W0313 12:50:29.846879 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1f58cc0_8cd6_48a1_a3c5_b40a8bfeafb7.slice/crio-1187c578cd1ef71364a0647429915ed55193e617dcb25ea2449edeb4da342bb0 WatchSource:0}: Error finding container 1187c578cd1ef71364a0647429915ed55193e617dcb25ea2449edeb4da342bb0: Status 404 returned error can't find the container with id 1187c578cd1ef71364a0647429915ed55193e617dcb25ea2449edeb4da342bb0 Mar 13 12:50:30.046816 master-0 kubenswrapper[19715]: I0313 12:50:30.046659 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-multus/multus-admission-controller-7769569c45-fntms" event={"ID":"d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7","Type":"ContainerStarted","Data":"1187c578cd1ef71364a0647429915ed55193e617dcb25ea2449edeb4da342bb0"} Mar 13 12:50:30.213305 master-0 kubenswrapper[19715]: I0313 12:50:30.213021 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 12:50:30.224111 master-0 kubenswrapper[19715]: W0313 12:50:30.224043 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod525e96d8_c2f8_43d4_9ab0_9077604d3b13.slice/crio-8c1ea5dbebfa541e8e11f166e99cd7d351057304a571bdb2effbb5df6901ccb1 WatchSource:0}: Error finding container 8c1ea5dbebfa541e8e11f166e99cd7d351057304a571bdb2effbb5df6901ccb1: Status 404 returned error can't find the container with id 8c1ea5dbebfa541e8e11f166e99cd7d351057304a571bdb2effbb5df6901ccb1 Mar 13 12:50:31.066062 master-0 kubenswrapper[19715]: I0313 12:50:31.065968 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"525e96d8-c2f8-43d4-9ab0-9077604d3b13","Type":"ContainerStarted","Data":"3ea05128587e284d92ee0b905fc3c850bb6638d76cc0a940ca8662cb8a1e2d30"} Mar 13 12:50:31.066821 master-0 kubenswrapper[19715]: I0313 12:50:31.066091 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"525e96d8-c2f8-43d4-9ab0-9077604d3b13","Type":"ContainerStarted","Data":"8c1ea5dbebfa541e8e11f166e99cd7d351057304a571bdb2effbb5df6901ccb1"} Mar 13 12:50:31.071375 master-0 kubenswrapper[19715]: I0313 12:50:31.071325 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-fntms" event={"ID":"d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7","Type":"ContainerStarted","Data":"09485ab80f9d0f5ced7289856bc7eb9a5cd0b1e960084ef03a4fa5d9c7b50d28"} Mar 13 12:50:31.071645 
master-0 kubenswrapper[19715]: I0313 12:50:31.071381 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-7769569c45-fntms" event={"ID":"d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7","Type":"ContainerStarted","Data":"7ebd1773007a604097449989448f0b979e4b551ead6bc48f7f6d125c5737b425"} Mar 13 12:50:31.097142 master-0 kubenswrapper[19715]: I0313 12:50:31.096892 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.09685771 podStartE2EDuration="2.09685771s" podCreationTimestamp="2026-03-13 12:50:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:50:31.095491687 +0000 UTC m=+57.662164464" watchObservedRunningTime="2026-03-13 12:50:31.09685771 +0000 UTC m=+57.663530467" Mar 13 12:50:31.143716 master-0 kubenswrapper[19715]: I0313 12:50:31.140489 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-7769569c45-fntms" podStartSLOduration=3.140463277 podStartE2EDuration="3.140463277s" podCreationTimestamp="2026-03-13 12:50:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:50:31.112458469 +0000 UTC m=+57.679131236" watchObservedRunningTime="2026-03-13 12:50:31.140463277 +0000 UTC m=+57.707136034" Mar 13 12:50:31.143716 master-0 kubenswrapper[19715]: I0313 12:50:31.141043 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-pbgd4"] Mar 13 12:50:31.143716 master-0 kubenswrapper[19715]: I0313 12:50:31.141330 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" podUID="4f942fce-07a9-4377-8330-c6249a5a8b24" 
containerName="multus-admission-controller" containerID="cri-o://a77c66d0bbef5ac4ba841e64d029a75b81101530693d755adf73cb234d47aa31" gracePeriod=30 Mar 13 12:50:31.143716 master-0 kubenswrapper[19715]: I0313 12:50:31.142479 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" podUID="4f942fce-07a9-4377-8330-c6249a5a8b24" containerName="kube-rbac-proxy" containerID="cri-o://5c411b542b6c604fb634e20ec1667bd444b32f47270e3ec6baff792160a18f75" gracePeriod=30 Mar 13 12:50:32.080531 master-0 kubenswrapper[19715]: I0313 12:50:32.080358 19715 generic.go:334] "Generic (PLEG): container finished" podID="4f942fce-07a9-4377-8330-c6249a5a8b24" containerID="5c411b542b6c604fb634e20ec1667bd444b32f47270e3ec6baff792160a18f75" exitCode=0 Mar 13 12:50:32.080531 master-0 kubenswrapper[19715]: I0313 12:50:32.080454 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" event={"ID":"4f942fce-07a9-4377-8330-c6249a5a8b24","Type":"ContainerDied","Data":"5c411b542b6c604fb634e20ec1667bd444b32f47270e3ec6baff792160a18f75"} Mar 13 12:50:35.887067 master-0 kubenswrapper[19715]: I0313 12:50:35.886980 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 12:50:35.887907 master-0 kubenswrapper[19715]: I0313 12:50:35.887355 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" containerID="cri-o://9eb4b2e62b81effa2b30fc9741ea362aa4ef66b19b64c96e124eb88cbf1ef364" gracePeriod=30 Mar 13 12:50:35.888553 master-0 kubenswrapper[19715]: I0313 12:50:35.888428 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 12:50:35.888879 master-0 kubenswrapper[19715]: E0313 12:50:35.888845 
19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:50:35.888966 master-0 kubenswrapper[19715]: I0313 12:50:35.888880 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:50:35.888966 master-0 kubenswrapper[19715]: E0313 12:50:35.888913 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:50:35.888966 master-0 kubenswrapper[19715]: I0313 12:50:35.888924 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:50:35.888966 master-0 kubenswrapper[19715]: E0313 12:50:35.888938 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:50:35.888966 master-0 kubenswrapper[19715]: I0313 12:50:35.888947 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:50:35.889376 master-0 kubenswrapper[19715]: I0313 12:50:35.889342 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:50:35.889444 master-0 kubenswrapper[19715]: I0313 12:50:35.889388 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:50:35.889444 master-0 kubenswrapper[19715]: I0313 12:50:35.889400 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 12:50:35.890795 master-0 kubenswrapper[19715]: I0313 12:50:35.890754 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:50:35.927077 master-0 kubenswrapper[19715]: I0313 12:50:35.927017 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 12:50:35.962152 master-0 kubenswrapper[19715]: I0313 12:50:35.962078 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:50:35.962389 master-0 kubenswrapper[19715]: I0313 12:50:35.962183 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:50:36.050458 master-0 kubenswrapper[19715]: I0313 12:50:36.050396 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:50:36.063398 master-0 kubenswrapper[19715]: I0313 12:50:36.063340 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:50:36.063398 master-0 kubenswrapper[19715]: I0313 12:50:36.063410 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:50:36.063771 master-0 kubenswrapper[19715]: I0313 12:50:36.063464 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:50:36.063771 master-0 kubenswrapper[19715]: I0313 12:50:36.063475 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:50:36.066250 master-0 kubenswrapper[19715]: I0313 12:50:36.066194 19715 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="f27a6996-dd98-4bd2-a53f-c5c87f947921" Mar 13 12:50:36.112376 master-0 
kubenswrapper[19715]: I0313 12:50:36.112310 19715 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="9eb4b2e62b81effa2b30fc9741ea362aa4ef66b19b64c96e124eb88cbf1ef364" exitCode=0 Mar 13 12:50:36.112376 master-0 kubenswrapper[19715]: I0313 12:50:36.112370 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ca332c737617b378f3c1c4f9b97d553fc831a36e421673003c1de189953cd04" Mar 13 12:50:36.112376 master-0 kubenswrapper[19715]: I0313 12:50:36.112406 19715 scope.go:117] "RemoveContainer" containerID="20738ab02637717910251883b8d669f0a85804f124bfcd78ee15eab7a5a827e7" Mar 13 12:50:36.112737 master-0 kubenswrapper[19715]: I0313 12:50:36.112527 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 12:50:36.164785 master-0 kubenswrapper[19715]: I0313 12:50:36.164623 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 13 12:50:36.165090 master-0 kubenswrapper[19715]: I0313 12:50:36.164783 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 13 12:50:36.165090 master-0 kubenswrapper[19715]: I0313 12:50:36.164861 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs" (OuterVolumeSpecName: "logs") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:50:36.165090 master-0 kubenswrapper[19715]: I0313 12:50:36.164891 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets" (OuterVolumeSpecName: "secrets") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:50:36.165270 master-0 kubenswrapper[19715]: I0313 12:50:36.165199 19715 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") on node \"master-0\" DevicePath \"\"" Mar 13 12:50:36.165270 master-0 kubenswrapper[19715]: I0313 12:50:36.165225 19715 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:50:36.226160 master-0 kubenswrapper[19715]: I0313 12:50:36.225975 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:50:37.121305 master-0 kubenswrapper[19715]: I0313 12:50:37.121179 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"4e2cfb308e87476917dc63b51b8f4ff3598a6a7c3eff81f201ee2f39a779bdc1"} Mar 13 12:50:37.121305 master-0 kubenswrapper[19715]: I0313 12:50:37.121111 19715 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="4e2cfb308e87476917dc63b51b8f4ff3598a6a7c3eff81f201ee2f39a779bdc1" exitCode=0 Mar 13 12:50:37.121935 master-0 kubenswrapper[19715]: I0313 12:50:37.121299 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"eef54790236aafb1ff6e4d20cddad15b6274928c80b4b8b66f54b00403de14ff"} Mar 13 12:50:37.124037 master-0 kubenswrapper[19715]: I0313 12:50:37.123980 19715 generic.go:334] "Generic (PLEG): container finished" podID="07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da" containerID="0d4bb79902a72b9f34162023ea867b8ebd9dc8bf3badc80d03372122dc90b2a4" exitCode=0 Mar 13 12:50:37.124037 master-0 kubenswrapper[19715]: I0313 12:50:37.124022 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da","Type":"ContainerDied","Data":"0d4bb79902a72b9f34162023ea867b8ebd9dc8bf3badc80d03372122dc90b2a4"} Mar 13 12:50:37.706651 master-0 kubenswrapper[19715]: I0313 12:50:37.706089 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a56802af72ce1aac6b5077f1695ac0" path="/var/lib/kubelet/pods/a1a56802af72ce1aac6b5077f1695ac0/volumes" Mar 13 12:50:37.706651 master-0 kubenswrapper[19715]: I0313 12:50:37.706486 19715 
mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 13 12:50:37.733926 master-0 kubenswrapper[19715]: I0313 12:50:37.733757 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 12:50:37.734176 master-0 kubenswrapper[19715]: I0313 12:50:37.734148 19715 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="f27a6996-dd98-4bd2-a53f-c5c87f947921" Mar 13 12:50:37.737702 master-0 kubenswrapper[19715]: I0313 12:50:37.736909 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 12:50:37.737702 master-0 kubenswrapper[19715]: I0313 12:50:37.736951 19715 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="f27a6996-dd98-4bd2-a53f-c5c87f947921" Mar 13 12:50:38.133460 master-0 kubenswrapper[19715]: I0313 12:50:38.133370 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"5e26810c41b04d6b7b18d460530be0d6b5cfdaf88d1a68d92b5c14e7b7261ce3"} Mar 13 12:50:38.134112 master-0 kubenswrapper[19715]: I0313 12:50:38.133468 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"9558436851ea5e9f09168e4882a85b318bea857709da4a1c87ae463ce4701ae4"} Mar 13 12:50:38.134112 master-0 kubenswrapper[19715]: I0313 12:50:38.133496 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"e0df16178a78e597a7ee479c2a01d936d3b8faaeddfcab7a0e0bd1705858f6b0"} Mar 13 12:50:38.148934 master-0 kubenswrapper[19715]: I0313 12:50:38.148835 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=3.148800477 podStartE2EDuration="3.148800477s" podCreationTimestamp="2026-03-13 12:50:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:50:38.147742093 +0000 UTC m=+64.714414860" watchObservedRunningTime="2026-03-13 12:50:38.148800477 +0000 UTC m=+64.715473224" Mar 13 12:50:38.498677 master-0 kubenswrapper[19715]: I0313 12:50:38.498627 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:50:38.579299 master-0 kubenswrapper[19715]: I0313 12:50:38.579231 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-var-lock\") pod \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " Mar 13 12:50:38.579299 master-0 kubenswrapper[19715]: I0313 12:50:38.579296 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kubelet-dir\") pod \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\" (UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " Mar 13 12:50:38.579668 master-0 kubenswrapper[19715]: I0313 12:50:38.579352 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kube-api-access\") pod \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\" 
(UID: \"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da\") " Mar 13 12:50:38.579812 master-0 kubenswrapper[19715]: I0313 12:50:38.579779 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-var-lock" (OuterVolumeSpecName: "var-lock") pod "07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da" (UID: "07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:50:38.579963 master-0 kubenswrapper[19715]: I0313 12:50:38.579942 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da" (UID: "07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:50:38.583265 master-0 kubenswrapper[19715]: I0313 12:50:38.583217 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da" (UID: "07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:50:38.681328 master-0 kubenswrapper[19715]: I0313 12:50:38.681228 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:50:38.681328 master-0 kubenswrapper[19715]: I0313 12:50:38.681279 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:50:38.681328 master-0 kubenswrapper[19715]: I0313 12:50:38.681290 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:50:39.146909 master-0 kubenswrapper[19715]: I0313 12:50:39.146838 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 12:50:39.147770 master-0 kubenswrapper[19715]: I0313 12:50:39.146940 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da","Type":"ContainerDied","Data":"72b959a542e46e3641183520e8e6d5e56a7222530509233edfd3479ba9158651"} Mar 13 12:50:39.147770 master-0 kubenswrapper[19715]: I0313 12:50:39.146970 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72b959a542e46e3641183520e8e6d5e56a7222530509233edfd3479ba9158651" Mar 13 12:50:39.147770 master-0 kubenswrapper[19715]: I0313 12:50:39.147043 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:50:42.126021 master-0 kubenswrapper[19715]: I0313 12:50:42.125933 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-qhg45"] Mar 13 12:50:42.126709 master-0 kubenswrapper[19715]: E0313 12:50:42.126284 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da" containerName="installer" Mar 13 12:50:42.126709 master-0 kubenswrapper[19715]: I0313 12:50:42.126305 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da" containerName="installer" Mar 13 12:50:42.126709 master-0 kubenswrapper[19715]: I0313 12:50:42.126509 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="07ccaa2e-0cf2-4205-b1e7-0d5b9d5fe4da" containerName="installer" Mar 13 12:50:42.127287 master-0 kubenswrapper[19715]: I0313 12:50:42.127087 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.130801 master-0 kubenswrapper[19715]: I0313 12:50:42.130746 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 12:50:42.131125 master-0 kubenswrapper[19715]: I0313 12:50:42.131096 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 13 12:50:42.131652 master-0 kubenswrapper[19715]: I0313 12:50:42.131621 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 12:50:42.131990 master-0 kubenswrapper[19715]: I0313 12:50:42.131955 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 13 12:50:42.133006 master-0 kubenswrapper[19715]: I0313 12:50:42.132963 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 13 12:50:42.137837 master-0 kubenswrapper[19715]: I0313 12:50:42.137774 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-qhg45"] Mar 13 12:50:42.274531 master-0 kubenswrapper[19715]: I0313 12:50:42.274453 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/572e278b-c463-49b0-a198-49bd9e2c288c-serving-cert\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.274531 master-0 kubenswrapper[19715]: I0313 12:50:42.274526 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ksbq\" (UniqueName: 
\"kubernetes.io/projected/572e278b-c463-49b0-a198-49bd9e2c288c-kube-api-access-2ksbq\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.274862 master-0 kubenswrapper[19715]: I0313 12:50:42.274564 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/572e278b-c463-49b0-a198-49bd9e2c288c-trusted-ca\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.274862 master-0 kubenswrapper[19715]: I0313 12:50:42.274771 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572e278b-c463-49b0-a198-49bd9e2c288c-config\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.375928 master-0 kubenswrapper[19715]: I0313 12:50:42.375857 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/572e278b-c463-49b0-a198-49bd9e2c288c-serving-cert\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.375928 master-0 kubenswrapper[19715]: I0313 12:50:42.375922 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ksbq\" (UniqueName: \"kubernetes.io/projected/572e278b-c463-49b0-a198-49bd9e2c288c-kube-api-access-2ksbq\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 
12:50:42.376255 master-0 kubenswrapper[19715]: I0313 12:50:42.376090 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/572e278b-c463-49b0-a198-49bd9e2c288c-trusted-ca\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.376255 master-0 kubenswrapper[19715]: I0313 12:50:42.376157 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572e278b-c463-49b0-a198-49bd9e2c288c-config\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.376781 master-0 kubenswrapper[19715]: E0313 12:50:42.376733 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/572e278b-c463-49b0-a198-49bd9e2c288c-trusted-ca podName:572e278b-c463-49b0-a198-49bd9e2c288c nodeName:}" failed. No retries permitted until 2026-03-13 12:50:42.876662599 +0000 UTC m=+69.443335356 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/572e278b-c463-49b0-a198-49bd9e2c288c-trusted-ca") pod "console-operator-6c7fb6b958-qhg45" (UID: "572e278b-c463-49b0-a198-49bd9e2c288c") : configmap references non-existent config key: ca-bundle.crt Mar 13 12:50:42.377456 master-0 kubenswrapper[19715]: I0313 12:50:42.377405 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/572e278b-c463-49b0-a198-49bd9e2c288c-config\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.380489 master-0 kubenswrapper[19715]: I0313 12:50:42.380422 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/572e278b-c463-49b0-a198-49bd9e2c288c-serving-cert\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.459855 master-0 kubenswrapper[19715]: I0313 12:50:42.459349 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ksbq\" (UniqueName: \"kubernetes.io/projected/572e278b-c463-49b0-a198-49bd9e2c288c-kube-api-access-2ksbq\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.774028 master-0 kubenswrapper[19715]: I0313 12:50:42.773949 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 12:50:42.774499 master-0 kubenswrapper[19715]: I0313 12:50:42.774371 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-3-master-0" 
podUID="525e96d8-c2f8-43d4-9ab0-9077604d3b13" containerName="installer" containerID="cri-o://3ea05128587e284d92ee0b905fc3c850bb6638d76cc0a940ca8662cb8a1e2d30" gracePeriod=30 Mar 13 12:50:42.881364 master-0 kubenswrapper[19715]: I0313 12:50:42.881295 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/572e278b-c463-49b0-a198-49bd9e2c288c-trusted-ca\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:42.882640 master-0 kubenswrapper[19715]: I0313 12:50:42.882595 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/572e278b-c463-49b0-a198-49bd9e2c288c-trusted-ca\") pod \"console-operator-6c7fb6b958-qhg45\" (UID: \"572e278b-c463-49b0-a198-49bd9e2c288c\") " pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:43.043617 master-0 kubenswrapper[19715]: I0313 12:50:43.043447 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:50:43.495784 master-0 kubenswrapper[19715]: I0313 12:50:43.495641 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-qhg45"] Mar 13 12:50:43.502566 master-0 kubenswrapper[19715]: W0313 12:50:43.502484 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod572e278b_c463_49b0_a198_49bd9e2c288c.slice/crio-c31a180240b8f4808a956ab2ce115991df282ff959cf508bc3c89c54eff8a1aa WatchSource:0}: Error finding container c31a180240b8f4808a956ab2ce115991df282ff959cf508bc3c89c54eff8a1aa: Status 404 returned error can't find the container with id c31a180240b8f4808a956ab2ce115991df282ff959cf508bc3c89c54eff8a1aa Mar 13 12:50:43.505416 master-0 kubenswrapper[19715]: I0313 12:50:43.505371 19715 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:50:44.196327 master-0 kubenswrapper[19715]: I0313 12:50:44.195784 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" event={"ID":"572e278b-c463-49b0-a198-49bd9e2c288c","Type":"ContainerStarted","Data":"c31a180240b8f4808a956ab2ce115991df282ff959cf508bc3c89c54eff8a1aa"} Mar 13 12:50:44.974270 master-0 kubenswrapper[19715]: I0313 12:50:44.974186 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 13 12:50:44.975294 master-0 kubenswrapper[19715]: I0313 12:50:44.975258 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:44.999549 master-0 kubenswrapper[19715]: I0313 12:50:44.999501 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 13 12:50:45.083065 master-0 kubenswrapper[19715]: I0313 12:50:45.082993 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/787f8414-a607-4672-bf7f-6494b4250de1-kube-api-access\") pod \"installer-4-master-0\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:45.083065 master-0 kubenswrapper[19715]: I0313 12:50:45.083056 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-var-lock\") pod \"installer-4-master-0\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:45.083370 master-0 kubenswrapper[19715]: I0313 12:50:45.083080 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:45.288004 master-0 kubenswrapper[19715]: I0313 12:50:45.263214 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/787f8414-a607-4672-bf7f-6494b4250de1-kube-api-access\") pod \"installer-4-master-0\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:45.288004 master-0 
kubenswrapper[19715]: I0313 12:50:45.263269 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-var-lock\") pod \"installer-4-master-0\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:45.288004 master-0 kubenswrapper[19715]: I0313 12:50:45.263291 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:45.288004 master-0 kubenswrapper[19715]: I0313 12:50:45.263384 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:45.288004 master-0 kubenswrapper[19715]: I0313 12:50:45.264099 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-var-lock\") pod \"installer-4-master-0\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:45.319646 master-0 kubenswrapper[19715]: I0313 12:50:45.319561 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd"] Mar 13 12:50:45.320525 master-0 kubenswrapper[19715]: I0313 12:50:45.320468 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:45.328221 master-0 kubenswrapper[19715]: I0313 12:50:45.328068 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 12:50:45.346427 master-0 kubenswrapper[19715]: I0313 12:50:45.346376 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/787f8414-a607-4672-bf7f-6494b4250de1-kube-api-access\") pod \"installer-4-master-0\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:45.381651 master-0 kubenswrapper[19715]: I0313 12:50:45.379826 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd"] Mar 13 12:50:45.465378 master-0 kubenswrapper[19715]: I0313 12:50:45.465306 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b4ld\" (UniqueName: \"kubernetes.io/projected/5837d1cb-aa4e-465e-ac1a-92a775c89f6b-kube-api-access-7b4ld\") pod \"machine-config-controller-ff46b7bdf-8gbgd\" (UID: \"5837d1cb-aa4e-465e-ac1a-92a775c89f6b\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:45.465378 master-0 kubenswrapper[19715]: I0313 12:50:45.465376 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5837d1cb-aa4e-465e-ac1a-92a775c89f6b-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-8gbgd\" (UID: \"5837d1cb-aa4e-465e-ac1a-92a775c89f6b\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:45.465747 master-0 kubenswrapper[19715]: I0313 12:50:45.465426 19715 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5837d1cb-aa4e-465e-ac1a-92a775c89f6b-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-8gbgd\" (UID: \"5837d1cb-aa4e-465e-ac1a-92a775c89f6b\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:45.571210 master-0 kubenswrapper[19715]: I0313 12:50:45.571054 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7b4ld\" (UniqueName: \"kubernetes.io/projected/5837d1cb-aa4e-465e-ac1a-92a775c89f6b-kube-api-access-7b4ld\") pod \"machine-config-controller-ff46b7bdf-8gbgd\" (UID: \"5837d1cb-aa4e-465e-ac1a-92a775c89f6b\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:45.571210 master-0 kubenswrapper[19715]: I0313 12:50:45.571201 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/5837d1cb-aa4e-465e-ac1a-92a775c89f6b-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-8gbgd\" (UID: \"5837d1cb-aa4e-465e-ac1a-92a775c89f6b\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:45.571676 master-0 kubenswrapper[19715]: I0313 12:50:45.571368 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5837d1cb-aa4e-465e-ac1a-92a775c89f6b-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-8gbgd\" (UID: \"5837d1cb-aa4e-465e-ac1a-92a775c89f6b\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:45.574333 master-0 kubenswrapper[19715]: I0313 12:50:45.572810 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/5837d1cb-aa4e-465e-ac1a-92a775c89f6b-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-8gbgd\" (UID: \"5837d1cb-aa4e-465e-ac1a-92a775c89f6b\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:45.580963 master-0 kubenswrapper[19715]: I0313 12:50:45.580083 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/5837d1cb-aa4e-465e-ac1a-92a775c89f6b-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-8gbgd\" (UID: \"5837d1cb-aa4e-465e-ac1a-92a775c89f6b\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:45.592628 master-0 kubenswrapper[19715]: I0313 12:50:45.592512 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b4ld\" (UniqueName: \"kubernetes.io/projected/5837d1cb-aa4e-465e-ac1a-92a775c89f6b-kube-api-access-7b4ld\") pod \"machine-config-controller-ff46b7bdf-8gbgd\" (UID: \"5837d1cb-aa4e-465e-ac1a-92a775c89f6b\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:45.720708 master-0 kubenswrapper[19715]: I0313 12:50:45.720629 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:50:45.720989 master-0 kubenswrapper[19715]: I0313 12:50:45.720913 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" Mar 13 12:50:46.875025 master-0 kubenswrapper[19715]: I0313 12:50:46.874958 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd"] Mar 13 12:50:46.886483 master-0 kubenswrapper[19715]: W0313 12:50:46.886392 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5837d1cb_aa4e_465e_ac1a_92a775c89f6b.slice/crio-b3c1f099d894bb6a8575e55c34518935f94b526b3a8a20329d2f4c6bba8d5c6f WatchSource:0}: Error finding container b3c1f099d894bb6a8575e55c34518935f94b526b3a8a20329d2f4c6bba8d5c6f: Status 404 returned error can't find the container with id b3c1f099d894bb6a8575e55c34518935f94b526b3a8a20329d2f4c6bba8d5c6f Mar 13 12:50:46.939921 master-0 kubenswrapper[19715]: I0313 12:50:46.939849 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 13 12:50:47.261311 master-0 kubenswrapper[19715]: I0313 12:50:47.261230 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49"] Mar 13 12:50:47.263608 master-0 kubenswrapper[19715]: I0313 12:50:47.262049 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49" Mar 13 12:50:47.269352 master-0 kubenswrapper[19715]: I0313 12:50:47.269278 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8cz8\" (UniqueName: \"kubernetes.io/projected/9a42a89e-11dd-4a5d-a0b8-97b57e937f08-kube-api-access-s8cz8\") pod \"network-check-source-7c67b67d47-7gg49\" (UID: \"9a42a89e-11dd-4a5d-a0b8-97b57e937f08\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49" Mar 13 12:50:47.270507 master-0 kubenswrapper[19715]: I0313 12:50:47.270455 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-79f8cd6fdd-q5h8k"] Mar 13 12:50:47.271608 master-0 kubenswrapper[19715]: I0313 12:50:47.271560 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.272427 master-0 kubenswrapper[19715]: I0313 12:50:47.272386 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc"] Mar 13 12:50:47.279690 master-0 kubenswrapper[19715]: I0313 12:50:47.276956 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 12:50:47.279690 master-0 kubenswrapper[19715]: I0313 12:50:47.276974 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 12:50:47.279690 master-0 kubenswrapper[19715]: I0313 12:50:47.277436 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 12:50:47.279690 master-0 kubenswrapper[19715]: I0313 12:50:47.277568 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc" Mar 13 12:50:47.279690 master-0 kubenswrapper[19715]: I0313 12:50:47.277840 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 12:50:47.279690 master-0 kubenswrapper[19715]: I0313 12:50:47.278297 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 12:50:47.279690 master-0 kubenswrapper[19715]: I0313 12:50:47.278564 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 12:50:47.287543 master-0 kubenswrapper[19715]: I0313 12:50:47.287492 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-g589p" Mar 13 12:50:47.287955 master-0 kubenswrapper[19715]: I0313 12:50:47.287933 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 13 12:50:47.292165 master-0 kubenswrapper[19715]: I0313 12:50:47.292123 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l2xgj"] Mar 13 12:50:47.295864 master-0 kubenswrapper[19715]: I0313 12:50:47.295822 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.298878 master-0 kubenswrapper[19715]: I0313 12:50:47.298802 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49"] Mar 13 12:50:47.303231 master-0 kubenswrapper[19715]: I0313 12:50:47.303169 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-rz8kc" Mar 13 12:50:47.303458 master-0 kubenswrapper[19715]: I0313 12:50:47.303372 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist" Mar 13 12:50:47.326151 master-0 kubenswrapper[19715]: I0313 12:50:47.325206 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-qhg45_572e278b-c463-49b0-a198-49bd9e2c288c/console-operator/0.log" Mar 13 12:50:47.326151 master-0 kubenswrapper[19715]: I0313 12:50:47.325273 19715 generic.go:334] "Generic (PLEG): container finished" podID="572e278b-c463-49b0-a198-49bd9e2c288c" containerID="08c8e93a77db246f3a012b8da27c0b68cd74a35397e793d6731a402bb45e9631" exitCode=255 Mar 13 12:50:47.326151 master-0 kubenswrapper[19715]: I0313 12:50:47.325367 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" event={"ID":"572e278b-c463-49b0-a198-49bd9e2c288c","Type":"ContainerDied","Data":"08c8e93a77db246f3a012b8da27c0b68cd74a35397e793d6731a402bb45e9631"} Mar 13 12:50:47.326151 master-0 kubenswrapper[19715]: I0313 12:50:47.325762 19715 scope.go:117] "RemoveContainer" containerID="08c8e93a77db246f3a012b8da27c0b68cd74a35397e793d6731a402bb45e9631" Mar 13 12:50:47.329319 master-0 kubenswrapper[19715]: I0313 12:50:47.326745 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" 
event={"ID":"787f8414-a607-4672-bf7f-6494b4250de1","Type":"ContainerStarted","Data":"a6e2536b371d826f4b6e1106d8a7c2512343398a962e2f9ddabfa67f445087eb"} Mar 13 12:50:47.329319 master-0 kubenswrapper[19715]: I0313 12:50:47.326783 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-ddjx7"] Mar 13 12:50:47.329319 master-0 kubenswrapper[19715]: I0313 12:50:47.327683 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:50:47.330168 master-0 kubenswrapper[19715]: I0313 12:50:47.329662 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 12:50:47.330168 master-0 kubenswrapper[19715]: I0313 12:50:47.330036 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 12:50:47.330937 master-0 kubenswrapper[19715]: I0313 12:50:47.330768 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 12:50:47.338621 master-0 kubenswrapper[19715]: I0313 12:50:47.331713 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc"] Mar 13 12:50:47.338621 master-0 kubenswrapper[19715]: I0313 12:50:47.335051 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ddjx7"] Mar 13 12:50:47.338621 master-0 kubenswrapper[19715]: I0313 12:50:47.337353 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" event={"ID":"5837d1cb-aa4e-465e-ac1a-92a775c89f6b","Type":"ContainerStarted","Data":"7fe83a71fc3cded82b11a1932d73e07ccc58211e8b419d796dc10c0ff287082d"} Mar 13 12:50:47.338621 master-0 kubenswrapper[19715]: I0313 12:50:47.337389 19715 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" event={"ID":"5837d1cb-aa4e-465e-ac1a-92a775c89f6b","Type":"ContainerStarted","Data":"563a09c97b0cfec4a7249ddbd4da8712b97c975fea8ed4f8343737c52a2be7c0"} Mar 13 12:50:47.338621 master-0 kubenswrapper[19715]: I0313 12:50:47.337400 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" event={"ID":"5837d1cb-aa4e-465e-ac1a-92a775c89f6b","Type":"ContainerStarted","Data":"b3c1f099d894bb6a8575e55c34518935f94b526b3a8a20329d2f4c6bba8d5c6f"} Mar 13 12:50:47.371384 master-0 kubenswrapper[19715]: I0313 12:50:47.370612 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jf2nj\" (UniqueName: \"kubernetes.io/projected/cbd86a78-769d-4abc-b02d-48d52d9937c4-kube-api-access-jf2nj\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:50:47.371384 master-0 kubenswrapper[19715]: I0313 12:50:47.370669 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:50:47.371384 master-0 kubenswrapper[19715]: I0313 12:50:47.370710 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s8cz8\" (UniqueName: \"kubernetes.io/projected/9a42a89e-11dd-4a5d-a0b8-97b57e937f08-kube-api-access-s8cz8\") pod \"network-check-source-7c67b67d47-7gg49\" (UID: \"9a42a89e-11dd-4a5d-a0b8-97b57e937f08\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49" Mar 13 12:50:47.389189 master-0 kubenswrapper[19715]: I0313 12:50:47.389134 
19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8cz8\" (UniqueName: \"kubernetes.io/projected/9a42a89e-11dd-4a5d-a0b8-97b57e937f08-kube-api-access-s8cz8\") pod \"network-check-source-7c67b67d47-7gg49\" (UID: \"9a42a89e-11dd-4a5d-a0b8-97b57e937f08\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49" Mar 13 12:50:47.408339 master-0 kubenswrapper[19715]: I0313 12:50:47.406644 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-8gbgd" podStartSLOduration=2.406542499 podStartE2EDuration="2.406542499s" podCreationTimestamp="2026-03-13 12:50:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:50:47.404546347 +0000 UTC m=+73.971219114" watchObservedRunningTime="2026-03-13 12:50:47.406542499 +0000 UTC m=+73.973215256" Mar 13 12:50:47.471962 master-0 kubenswrapper[19715]: I0313 12:50:47.471876 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d2f4900f-4ee7-4879-a97c-c6443d0d9acc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-6w4fc\" (UID: \"d2f4900f-4ee7-4879-a97c-c6443d0d9acc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc" Mar 13 12:50:47.472238 master-0 kubenswrapper[19715]: I0313 12:50:47.471972 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xxc7\" (UniqueName: \"kubernetes.io/projected/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-kube-api-access-8xxc7\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.472238 master-0 kubenswrapper[19715]: I0313 12:50:47.472009 
19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/38ba3e49-717e-458d-bb3d-4acbd6d904bf-stats-auth\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.472238 master-0 kubenswrapper[19715]: I0313 12:50:47.472073 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.472398 master-0 kubenswrapper[19715]: I0313 12:50:47.472309 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/38ba3e49-717e-458d-bb3d-4acbd6d904bf-default-certificate\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.472460 master-0 kubenswrapper[19715]: I0313 12:50:47.472407 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jf2nj\" (UniqueName: \"kubernetes.io/projected/cbd86a78-769d-4abc-b02d-48d52d9937c4-kube-api-access-jf2nj\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:50:47.472460 master-0 kubenswrapper[19715]: I0313 12:50:47.472433 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2xpq\" (UniqueName: \"kubernetes.io/projected/38ba3e49-717e-458d-bb3d-4acbd6d904bf-kube-api-access-d2xpq\") pod \"router-default-79f8cd6fdd-q5h8k\" 
(UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.472516 master-0 kubenswrapper[19715]: I0313 12:50:47.472464 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.472516 master-0 kubenswrapper[19715]: I0313 12:50:47.472495 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:50:47.472756 master-0 kubenswrapper[19715]: I0313 12:50:47.472537 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-ready\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.472756 master-0 kubenswrapper[19715]: I0313 12:50:47.472643 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38ba3e49-717e-458d-bb3d-4acbd6d904bf-metrics-certs\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.472756 master-0 kubenswrapper[19715]: I0313 12:50:47.472714 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/38ba3e49-717e-458d-bb3d-4acbd6d904bf-service-ca-bundle\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.472937 master-0 kubenswrapper[19715]: E0313 12:50:47.472889 19715 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Mar 13 12:50:47.473022 master-0 kubenswrapper[19715]: E0313 12:50:47.472968 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert podName:cbd86a78-769d-4abc-b02d-48d52d9937c4 nodeName:}" failed. No retries permitted until 2026-03-13 12:50:47.97294912 +0000 UTC m=+74.539621977 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert") pod "ingress-canary-ddjx7" (UID: "cbd86a78-769d-4abc-b02d-48d52d9937c4") : secret "canary-serving-cert" not found Mar 13 12:50:47.491461 master-0 kubenswrapper[19715]: I0313 12:50:47.491420 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jf2nj\" (UniqueName: \"kubernetes.io/projected/cbd86a78-769d-4abc-b02d-48d52d9937c4-kube-api-access-jf2nj\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:50:47.574603 master-0 kubenswrapper[19715]: I0313 12:50:47.574512 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2xpq\" (UniqueName: \"kubernetes.io/projected/38ba3e49-717e-458d-bb3d-4acbd6d904bf-kube-api-access-d2xpq\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.574603 master-0 kubenswrapper[19715]: I0313 12:50:47.574610 19715 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.575014 master-0 kubenswrapper[19715]: I0313 12:50:47.574660 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-ready\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.575014 master-0 kubenswrapper[19715]: I0313 12:50:47.574694 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38ba3e49-717e-458d-bb3d-4acbd6d904bf-metrics-certs\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.575014 master-0 kubenswrapper[19715]: I0313 12:50:47.574739 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38ba3e49-717e-458d-bb3d-4acbd6d904bf-service-ca-bundle\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.575014 master-0 kubenswrapper[19715]: I0313 12:50:47.574774 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d2f4900f-4ee7-4879-a97c-c6443d0d9acc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-6w4fc\" (UID: \"d2f4900f-4ee7-4879-a97c-c6443d0d9acc\") " 
pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc" Mar 13 12:50:47.575014 master-0 kubenswrapper[19715]: I0313 12:50:47.574831 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xxc7\" (UniqueName: \"kubernetes.io/projected/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-kube-api-access-8xxc7\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.575014 master-0 kubenswrapper[19715]: I0313 12:50:47.574869 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/38ba3e49-717e-458d-bb3d-4acbd6d904bf-stats-auth\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.575014 master-0 kubenswrapper[19715]: I0313 12:50:47.574906 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.575014 master-0 kubenswrapper[19715]: I0313 12:50:47.574933 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/38ba3e49-717e-458d-bb3d-4acbd6d904bf-default-certificate\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.576123 master-0 kubenswrapper[19715]: I0313 12:50:47.576067 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.576509 master-0 kubenswrapper[19715]: I0313 12:50:47.576295 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-ready\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.577022 master-0 kubenswrapper[19715]: I0313 12:50:47.576979 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.577160 master-0 kubenswrapper[19715]: I0313 12:50:47.577122 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/38ba3e49-717e-458d-bb3d-4acbd6d904bf-service-ca-bundle\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.579125 master-0 kubenswrapper[19715]: I0313 12:50:47.579100 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/38ba3e49-717e-458d-bb3d-4acbd6d904bf-stats-auth\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.580104 master-0 kubenswrapper[19715]: I0313 12:50:47.579918 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"default-certificate\" (UniqueName: \"kubernetes.io/secret/38ba3e49-717e-458d-bb3d-4acbd6d904bf-default-certificate\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.580329 master-0 kubenswrapper[19715]: I0313 12:50:47.580110 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/d2f4900f-4ee7-4879-a97c-c6443d0d9acc-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-6w4fc\" (UID: \"d2f4900f-4ee7-4879-a97c-c6443d0d9acc\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc" Mar 13 12:50:47.584411 master-0 kubenswrapper[19715]: I0313 12:50:47.584372 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/38ba3e49-717e-458d-bb3d-4acbd6d904bf-metrics-certs\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.596052 master-0 kubenswrapper[19715]: I0313 12:50:47.595988 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2xpq\" (UniqueName: \"kubernetes.io/projected/38ba3e49-717e-458d-bb3d-4acbd6d904bf-kube-api-access-d2xpq\") pod \"router-default-79f8cd6fdd-q5h8k\" (UID: \"38ba3e49-717e-458d-bb3d-4acbd6d904bf\") " pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.605389 master-0 kubenswrapper[19715]: I0313 12:50:47.605351 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xxc7\" (UniqueName: \"kubernetes.io/projected/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-kube-api-access-8xxc7\") pod \"cni-sysctl-allowlist-ds-l2xgj\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") " pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.609227 
master-0 kubenswrapper[19715]: I0313 12:50:47.609155 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49" Mar 13 12:50:47.640214 master-0 kubenswrapper[19715]: I0313 12:50:47.631323 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" Mar 13 12:50:47.686988 master-0 kubenswrapper[19715]: W0313 12:50:47.686858 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38ba3e49_717e_458d_bb3d_4acbd6d904bf.slice/crio-90341a7b3f12648c68dcf07b10e6a7cac04d12d1b6f4c8a8e3fad0ee99ff62cc WatchSource:0}: Error finding container 90341a7b3f12648c68dcf07b10e6a7cac04d12d1b6f4c8a8e3fad0ee99ff62cc: Status 404 returned error can't find the container with id 90341a7b3f12648c68dcf07b10e6a7cac04d12d1b6f4c8a8e3fad0ee99ff62cc Mar 13 12:50:47.746198 master-0 kubenswrapper[19715]: I0313 12:50:47.746088 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc" Mar 13 12:50:47.783528 master-0 kubenswrapper[19715]: I0313 12:50:47.783460 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:47.996478 master-0 kubenswrapper[19715]: I0313 12:50:47.996408 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:50:47.997155 master-0 kubenswrapper[19715]: E0313 12:50:47.996674 19715 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Mar 13 12:50:47.997155 master-0 kubenswrapper[19715]: E0313 12:50:47.996729 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert podName:cbd86a78-769d-4abc-b02d-48d52d9937c4 nodeName:}" failed. No retries permitted until 2026-03-13 12:50:48.996714306 +0000 UTC m=+75.563387063 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert") pod "ingress-canary-ddjx7" (UID: "cbd86a78-769d-4abc-b02d-48d52d9937c4") : secret "canary-serving-cert" not found Mar 13 12:50:48.157784 master-0 kubenswrapper[19715]: I0313 12:50:48.156520 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49"] Mar 13 12:50:48.284240 master-0 kubenswrapper[19715]: I0313 12:50:48.284187 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc"] Mar 13 12:50:48.299369 master-0 kubenswrapper[19715]: W0313 12:50:48.299278 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd2f4900f_4ee7_4879_a97c_c6443d0d9acc.slice/crio-08cd224a8507cf5424d1117187613cc350c49cba74950660d3e66002b1d7a0aa WatchSource:0}: Error finding container 08cd224a8507cf5424d1117187613cc350c49cba74950660d3e66002b1d7a0aa: Status 404 returned error can't find the container with id 08cd224a8507cf5424d1117187613cc350c49cba74950660d3e66002b1d7a0aa Mar 13 12:50:48.346622 master-0 kubenswrapper[19715]: I0313 12:50:48.346532 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"787f8414-a607-4672-bf7f-6494b4250de1","Type":"ContainerStarted","Data":"2687ae6b014a5827eef79787820c82f3a426c8b755ef25f1712b51c3677d0ae1"} Mar 13 12:50:48.348558 master-0 kubenswrapper[19715]: I0313 12:50:48.348506 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49" event={"ID":"9a42a89e-11dd-4a5d-a0b8-97b57e937f08","Type":"ContainerStarted","Data":"e54e3a14ec91ffd466572590a31b8c86797f86dd3fa00bb54da14fe445196b59"} Mar 13 12:50:48.348704 master-0 kubenswrapper[19715]: I0313 12:50:48.348570 
19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49" event={"ID":"9a42a89e-11dd-4a5d-a0b8-97b57e937f08","Type":"ContainerStarted","Data":"3a205430fb2cebdc6cf51b184357c41c728fc605404d33fcf2b327204d7d50da"} Mar 13 12:50:48.350141 master-0 kubenswrapper[19715]: I0313 12:50:48.350080 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" event={"ID":"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd","Type":"ContainerStarted","Data":"039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c"} Mar 13 12:50:48.350224 master-0 kubenswrapper[19715]: I0313 12:50:48.350144 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" event={"ID":"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd","Type":"ContainerStarted","Data":"b9a6d8d694b1ba6b438559e0e897eb4939b3a5692ebb952a3d8db17b6e0a3186"} Mar 13 12:50:48.350388 master-0 kubenswrapper[19715]: I0313 12:50:48.350347 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" Mar 13 12:50:48.351309 master-0 kubenswrapper[19715]: I0313 12:50:48.351277 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" event={"ID":"38ba3e49-717e-458d-bb3d-4acbd6d904bf","Type":"ContainerStarted","Data":"90341a7b3f12648c68dcf07b10e6a7cac04d12d1b6f4c8a8e3fad0ee99ff62cc"} Mar 13 12:50:48.352513 master-0 kubenswrapper[19715]: I0313 12:50:48.352489 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc" event={"ID":"d2f4900f-4ee7-4879-a97c-c6443d0d9acc","Type":"ContainerStarted","Data":"08cd224a8507cf5424d1117187613cc350c49cba74950660d3e66002b1d7a0aa"} Mar 13 12:50:48.356355 master-0 kubenswrapper[19715]: I0313 12:50:48.356323 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-qhg45_572e278b-c463-49b0-a198-49bd9e2c288c/console-operator/1.log" Mar 13 12:50:48.357144 master-0 kubenswrapper[19715]: I0313 12:50:48.357084 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-qhg45_572e278b-c463-49b0-a198-49bd9e2c288c/console-operator/0.log" Mar 13 12:50:48.357225 master-0 kubenswrapper[19715]: I0313 12:50:48.357139 19715 generic.go:334] "Generic (PLEG): container finished" podID="572e278b-c463-49b0-a198-49bd9e2c288c" containerID="adcc2ae99c74d89c14a029438416cc6d981420d2d8f4442940afaa840b24846c" exitCode=255 Mar 13 12:50:48.357294 master-0 kubenswrapper[19715]: I0313 12:50:48.357241 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" event={"ID":"572e278b-c463-49b0-a198-49bd9e2c288c","Type":"ContainerDied","Data":"adcc2ae99c74d89c14a029438416cc6d981420d2d8f4442940afaa840b24846c"} Mar 13 12:50:48.357386 master-0 kubenswrapper[19715]: I0313 12:50:48.357363 19715 scope.go:117] "RemoveContainer" containerID="08c8e93a77db246f3a012b8da27c0b68cd74a35397e793d6731a402bb45e9631" Mar 13 12:50:48.358199 master-0 kubenswrapper[19715]: I0313 12:50:48.358123 19715 scope.go:117] "RemoveContainer" containerID="adcc2ae99c74d89c14a029438416cc6d981420d2d8f4442940afaa840b24846c" Mar 13 12:50:48.358413 master-0 kubenswrapper[19715]: E0313 12:50:48.358376 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-6c7fb6b958-qhg45_openshift-console-operator(572e278b-c463-49b0-a198-49bd9e2c288c)\"" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" podUID="572e278b-c463-49b0-a198-49bd9e2c288c" Mar 13 12:50:48.396696 master-0 kubenswrapper[19715]: I0313 12:50:48.392121 19715 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=4.392099819 podStartE2EDuration="4.392099819s" podCreationTimestamp="2026-03-13 12:50:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:50:48.368969274 +0000 UTC m=+74.935642031" watchObservedRunningTime="2026-03-13 12:50:48.392099819 +0000 UTC m=+74.958772576"
Mar 13 12:50:48.445839 master-0 kubenswrapper[19715]: I0313 12:50:48.445696 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" podStartSLOduration=1.445670018 podStartE2EDuration="1.445670018s" podCreationTimestamp="2026-03-13 12:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:50:48.445057778 +0000 UTC m=+75.011730545" watchObservedRunningTime="2026-03-13 12:50:48.445670018 +0000 UTC m=+75.012342785"
Mar 13 12:50:48.467693 master-0 kubenswrapper[19715]: I0313 12:50:48.466688 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-7gg49" podStartSLOduration=775.466647825 podStartE2EDuration="12m55.466647825s" podCreationTimestamp="2026-03-13 12:37:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:50:48.464734385 +0000 UTC m=+75.031407142" watchObservedRunningTime="2026-03-13 12:50:48.466647825 +0000 UTC m=+75.033320592"
Mar 13 12:50:49.013153 master-0 kubenswrapper[19715]: I0313 12:50:49.013081 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7"
Mar 13 12:50:49.014426 master-0 kubenswrapper[19715]: E0313 12:50:49.013298 19715 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 13 12:50:49.014566 master-0 kubenswrapper[19715]: E0313 12:50:49.014463 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert podName:cbd86a78-769d-4abc-b02d-48d52d9937c4 nodeName:}" failed. No retries permitted until 2026-03-13 12:50:51.014440415 +0000 UTC m=+77.581113162 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert") pod "ingress-canary-ddjx7" (UID: "cbd86a78-769d-4abc-b02d-48d52d9937c4") : secret "canary-serving-cert" not found
Mar 13 12:50:49.397658 master-0 kubenswrapper[19715]: I0313 12:50:49.396979 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-qhg45_572e278b-c463-49b0-a198-49bd9e2c288c/console-operator/1.log"
Mar 13 12:50:49.398090 master-0 kubenswrapper[19715]: I0313 12:50:49.398057 19715 scope.go:117] "RemoveContainer" containerID="adcc2ae99c74d89c14a029438416cc6d981420d2d8f4442940afaa840b24846c"
Mar 13 12:50:49.398271 master-0 kubenswrapper[19715]: E0313 12:50:49.398216 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-6c7fb6b958-qhg45_openshift-console-operator(572e278b-c463-49b0-a198-49bd9e2c288c)\"" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" podUID="572e278b-c463-49b0-a198-49bd9e2c288c"
Mar 13 12:50:49.424358 master-0 kubenswrapper[19715]: I0313 12:50:49.424305 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj"
Mar 13 12:50:50.123684 master-0 kubenswrapper[19715]: I0313 12:50:50.123615 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l2xgj"]
Mar 13 12:50:51.051087 master-0 kubenswrapper[19715]: I0313 12:50:51.050998 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7"
Mar 13 12:50:51.051384 master-0 kubenswrapper[19715]: E0313 12:50:51.051328 19715 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 13 12:50:51.051384 master-0 kubenswrapper[19715]: E0313 12:50:51.051386 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert podName:cbd86a78-769d-4abc-b02d-48d52d9937c4 nodeName:}" failed. No retries permitted until 2026-03-13 12:50:55.051368717 +0000 UTC m=+81.618041474 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert") pod "ingress-canary-ddjx7" (UID: "cbd86a78-769d-4abc-b02d-48d52d9937c4") : secret "canary-serving-cert" not found
Mar 13 12:50:51.412442 master-0 kubenswrapper[19715]: I0313 12:50:51.412374 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc" event={"ID":"d2f4900f-4ee7-4879-a97c-c6443d0d9acc","Type":"ContainerStarted","Data":"15ab65ef7d3bf17b8195a0b36dfec12ff83ad0d7a4e5c7c909b2ef8199f31bb8"}
Mar 13 12:50:51.413113 master-0 kubenswrapper[19715]: I0313 12:50:51.412771 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc"
Mar 13 12:50:51.414421 master-0 kubenswrapper[19715]: I0313 12:50:51.414315 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" podUID="cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" gracePeriod=30
Mar 13 12:50:51.415291 master-0 kubenswrapper[19715]: I0313 12:50:51.414645 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" event={"ID":"38ba3e49-717e-458d-bb3d-4acbd6d904bf","Type":"ContainerStarted","Data":"ff8725e9850adc07af363bb407e8f8dd2eaf355f7d88674287fea8e77d0eaf65"}
Mar 13 12:50:51.418830 master-0 kubenswrapper[19715]: I0313 12:50:51.418383 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc"
Mar 13 12:50:51.446774 master-0 kubenswrapper[19715]: I0313 12:50:51.446684 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-6w4fc" podStartSLOduration=418.904181439 podStartE2EDuration="7m1.446664037s" podCreationTimestamp="2026-03-13 12:43:50 +0000 UTC" firstStartedPulling="2026-03-13 12:50:48.30280506 +0000 UTC m=+74.869477817" lastFinishedPulling="2026-03-13 12:50:50.845287658 +0000 UTC m=+77.411960415" observedRunningTime="2026-03-13 12:50:51.445075637 +0000 UTC m=+78.011748394" watchObservedRunningTime="2026-03-13 12:50:51.446664037 +0000 UTC m=+78.013336794"
Mar 13 12:50:51.636656 master-0 kubenswrapper[19715]: I0313 12:50:51.632923 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k"
Mar 13 12:50:51.636656 master-0 kubenswrapper[19715]: I0313 12:50:51.636568 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k"
Mar 13 12:50:51.661766 master-0 kubenswrapper[19715]: I0313 12:50:51.661683 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k" podStartSLOduration=641.505811853 podStartE2EDuration="10m44.661658305s" podCreationTimestamp="2026-03-13 12:40:07 +0000 UTC" firstStartedPulling="2026-03-13 12:50:47.689870849 +0000 UTC m=+74.256543616" lastFinishedPulling="2026-03-13 12:50:50.845717311 +0000 UTC m=+77.412390068" observedRunningTime="2026-03-13 12:50:51.495299501 +0000 UTC m=+78.061972268" watchObservedRunningTime="2026-03-13 12:50:51.661658305 +0000 UTC m=+78.228331052"
Mar 13 12:50:51.664316 master-0 kubenswrapper[19715]: I0313 12:50:51.664202 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"]
Mar 13 12:50:51.665657 master-0 kubenswrapper[19715]: I0313 12:50:51.665619 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.672290 master-0 kubenswrapper[19715]: I0313 12:50:51.671624 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 13 12:50:51.672290 master-0 kubenswrapper[19715]: I0313 12:50:51.671711 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 13 12:50:51.672290 master-0 kubenswrapper[19715]: I0313 12:50:51.671851 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 13 12:50:51.672290 master-0 kubenswrapper[19715]: I0313 12:50:51.671645 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-lcdwj"
Mar 13 12:50:51.693293 master-0 kubenswrapper[19715]: I0313 12:50:51.690229 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"]
Mar 13 12:50:51.764180 master-0 kubenswrapper[19715]: I0313 12:50:51.761216 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpgv8\" (UniqueName: \"kubernetes.io/projected/e79537b5-fbdf-419a-9148-da0433806c88-kube-api-access-xpgv8\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.764180 master-0 kubenswrapper[19715]: I0313 12:50:51.761273 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.764180 master-0 kubenswrapper[19715]: I0313 12:50:51.761331 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e79537b5-fbdf-419a-9148-da0433806c88-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.764180 master-0 kubenswrapper[19715]: I0313 12:50:51.761353 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.862807 master-0 kubenswrapper[19715]: I0313 12:50:51.862728 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpgv8\" (UniqueName: \"kubernetes.io/projected/e79537b5-fbdf-419a-9148-da0433806c88-kube-api-access-xpgv8\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.863043 master-0 kubenswrapper[19715]: I0313 12:50:51.862943 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.863116 master-0 kubenswrapper[19715]: I0313 12:50:51.863030 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e79537b5-fbdf-419a-9148-da0433806c88-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.863116 master-0 kubenswrapper[19715]: E0313 12:50:51.863044 19715 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 13 12:50:51.863201 master-0 kubenswrapper[19715]: E0313 12:50:51.863138 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls podName:e79537b5-fbdf-419a-9148-da0433806c88 nodeName:}" failed. No retries permitted until 2026-03-13 12:50:52.36311774 +0000 UTC m=+78.929790547 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-wb9b4" (UID: "e79537b5-fbdf-419a-9148-da0433806c88") : secret "prometheus-operator-tls" not found
Mar 13 12:50:51.863201 master-0 kubenswrapper[19715]: I0313 12:50:51.863075 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.864400 master-0 kubenswrapper[19715]: I0313 12:50:51.864359 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/e79537b5-fbdf-419a-9148-da0433806c88-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.867677 master-0 kubenswrapper[19715]: I0313 12:50:51.867636 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:51.879123 master-0 kubenswrapper[19715]: I0313 12:50:51.879073 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpgv8\" (UniqueName: \"kubernetes.io/projected/e79537b5-fbdf-419a-9148-da0433806c88-kube-api-access-xpgv8\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:52.370490 master-0 kubenswrapper[19715]: I0313 12:50:52.370403 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:52.370784 master-0 kubenswrapper[19715]: E0313 12:50:52.370643 19715 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 13 12:50:52.370784 master-0 kubenswrapper[19715]: E0313 12:50:52.370747 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls podName:e79537b5-fbdf-419a-9148-da0433806c88 nodeName:}" failed. No retries permitted until 2026-03-13 12:50:53.37072294 +0000 UTC m=+79.937395697 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-wb9b4" (UID: "e79537b5-fbdf-419a-9148-da0433806c88") : secret "prometheus-operator-tls" not found
Mar 13 12:50:52.427431 master-0 kubenswrapper[19715]: I0313 12:50:52.425672 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k"
Mar 13 12:50:52.442019 master-0 kubenswrapper[19715]: I0313 12:50:52.441947 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-79f8cd6fdd-q5h8k"
Mar 13 12:50:52.482960 master-0 kubenswrapper[19715]: I0313 12:50:52.482905 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-kmnl4"]
Mar 13 12:50:52.483772 master-0 kubenswrapper[19715]: I0313 12:50:52.483753 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.488253 master-0 kubenswrapper[19715]: I0313 12:50:52.487996 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 13 12:50:52.488253 master-0 kubenswrapper[19715]: I0313 12:50:52.488022 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-5s2w7"
Mar 13 12:50:52.575301 master-0 kubenswrapper[19715]: I0313 12:50:52.575237 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d09c1267-3853-4ddf-8b98-2c0d8b7c845c-host\") pod \"node-ca-kmnl4\" (UID: \"d09c1267-3853-4ddf-8b98-2c0d8b7c845c\") " pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.575519 master-0 kubenswrapper[19715]: I0313 12:50:52.575315 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9fzx\" (UniqueName: \"kubernetes.io/projected/d09c1267-3853-4ddf-8b98-2c0d8b7c845c-kube-api-access-d9fzx\") pod \"node-ca-kmnl4\" (UID: \"d09c1267-3853-4ddf-8b98-2c0d8b7c845c\") " pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.575519 master-0 kubenswrapper[19715]: I0313 12:50:52.575375 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d09c1267-3853-4ddf-8b98-2c0d8b7c845c-serviceca\") pod \"node-ca-kmnl4\" (UID: \"d09c1267-3853-4ddf-8b98-2c0d8b7c845c\") " pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.676558 master-0 kubenswrapper[19715]: I0313 12:50:52.676331 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d09c1267-3853-4ddf-8b98-2c0d8b7c845c-host\") pod \"node-ca-kmnl4\" (UID: \"d09c1267-3853-4ddf-8b98-2c0d8b7c845c\") " pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.676558 master-0 kubenswrapper[19715]: I0313 12:50:52.676480 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/d09c1267-3853-4ddf-8b98-2c0d8b7c845c-host\") pod \"node-ca-kmnl4\" (UID: \"d09c1267-3853-4ddf-8b98-2c0d8b7c845c\") " pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.676895 master-0 kubenswrapper[19715]: I0313 12:50:52.676614 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9fzx\" (UniqueName: \"kubernetes.io/projected/d09c1267-3853-4ddf-8b98-2c0d8b7c845c-kube-api-access-d9fzx\") pod \"node-ca-kmnl4\" (UID: \"d09c1267-3853-4ddf-8b98-2c0d8b7c845c\") " pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.676895 master-0 kubenswrapper[19715]: I0313 12:50:52.676764 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d09c1267-3853-4ddf-8b98-2c0d8b7c845c-serviceca\") pod \"node-ca-kmnl4\" (UID: \"d09c1267-3853-4ddf-8b98-2c0d8b7c845c\") " pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.677543 master-0 kubenswrapper[19715]: I0313 12:50:52.677509 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/d09c1267-3853-4ddf-8b98-2c0d8b7c845c-serviceca\") pod \"node-ca-kmnl4\" (UID: \"d09c1267-3853-4ddf-8b98-2c0d8b7c845c\") " pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.695424 master-0 kubenswrapper[19715]: I0313 12:50:52.695366 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9fzx\" (UniqueName: \"kubernetes.io/projected/d09c1267-3853-4ddf-8b98-2c0d8b7c845c-kube-api-access-d9fzx\") pod \"node-ca-kmnl4\" (UID: \"d09c1267-3853-4ddf-8b98-2c0d8b7c845c\") " pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.812309 master-0 kubenswrapper[19715]: I0313 12:50:52.812137 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-kmnl4"
Mar 13 12:50:52.834031 master-0 kubenswrapper[19715]: W0313 12:50:52.833964 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd09c1267_3853_4ddf_8b98_2c0d8b7c845c.slice/crio-75272edd74ef69d7da2e94294234674d2ceb143e5f7c47ab0d35666a3a46fdad WatchSource:0}: Error finding container 75272edd74ef69d7da2e94294234674d2ceb143e5f7c47ab0d35666a3a46fdad: Status 404 returned error can't find the container with id 75272edd74ef69d7da2e94294234674d2ceb143e5f7c47ab0d35666a3a46fdad
Mar 13 12:50:53.044357 master-0 kubenswrapper[19715]: I0313 12:50:53.044207 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45"
Mar 13 12:50:53.044357 master-0 kubenswrapper[19715]: I0313 12:50:53.044289 19715 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45"
Mar 13 12:50:53.044990 master-0 kubenswrapper[19715]: I0313 12:50:53.044964 19715 scope.go:117] "RemoveContainer" containerID="adcc2ae99c74d89c14a029438416cc6d981420d2d8f4442940afaa840b24846c"
Mar 13 12:50:53.045209 master-0 kubenswrapper[19715]: E0313 12:50:53.045178 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=console-operator pod=console-operator-6c7fb6b958-qhg45_openshift-console-operator(572e278b-c463-49b0-a198-49bd9e2c288c)\"" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" podUID="572e278b-c463-49b0-a198-49bd9e2c288c"
Mar 13 12:50:53.390737 master-0 kubenswrapper[19715]: I0313 12:50:53.390022 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:53.390737 master-0 kubenswrapper[19715]: E0313 12:50:53.390320 19715 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 13 12:50:53.390737 master-0 kubenswrapper[19715]: E0313 12:50:53.390421 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls podName:e79537b5-fbdf-419a-9148-da0433806c88 nodeName:}" failed. No retries permitted until 2026-03-13 12:50:55.390399478 +0000 UTC m=+81.957072235 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-wb9b4" (UID: "e79537b5-fbdf-419a-9148-da0433806c88") : secret "prometheus-operator-tls" not found
Mar 13 12:50:53.432329 master-0 kubenswrapper[19715]: I0313 12:50:53.432258 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kmnl4" event={"ID":"d09c1267-3853-4ddf-8b98-2c0d8b7c845c","Type":"ContainerStarted","Data":"75272edd74ef69d7da2e94294234674d2ceb143e5f7c47ab0d35666a3a46fdad"}
Mar 13 12:50:55.114942 master-0 kubenswrapper[19715]: I0313 12:50:55.114794 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7"
Mar 13 12:50:55.115522 master-0 kubenswrapper[19715]: E0313 12:50:55.115054 19715 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 13 12:50:55.115522 master-0 kubenswrapper[19715]: E0313 12:50:55.115115 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert podName:cbd86a78-769d-4abc-b02d-48d52d9937c4 nodeName:}" failed. No retries permitted until 2026-03-13 12:51:03.115097525 +0000 UTC m=+89.681770282 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert") pod "ingress-canary-ddjx7" (UID: "cbd86a78-769d-4abc-b02d-48d52d9937c4") : secret "canary-serving-cert" not found
Mar 13 12:50:55.423039 master-0 kubenswrapper[19715]: I0313 12:50:55.422952 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:55.423354 master-0 kubenswrapper[19715]: E0313 12:50:55.423151 19715 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 13 12:50:55.423354 master-0 kubenswrapper[19715]: E0313 12:50:55.423243 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls podName:e79537b5-fbdf-419a-9148-da0433806c88 nodeName:}" failed. No retries permitted until 2026-03-13 12:50:59.423218832 +0000 UTC m=+85.989891629 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-wb9b4" (UID: "e79537b5-fbdf-419a-9148-da0433806c88") : secret "prometheus-operator-tls" not found
Mar 13 12:50:55.453987 master-0 kubenswrapper[19715]: I0313 12:50:55.453914 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-kmnl4" event={"ID":"d09c1267-3853-4ddf-8b98-2c0d8b7c845c","Type":"ContainerStarted","Data":"80cb77a41331c75b14b6b43f5bb3be9839e3fc1819e528c192b42480e6e2fad2"}
Mar 13 12:50:55.473665 master-0 kubenswrapper[19715]: I0313 12:50:55.473510 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-kmnl4" podStartSLOduration=1.454391655 podStartE2EDuration="3.473482428s" podCreationTimestamp="2026-03-13 12:50:52 +0000 UTC" firstStartedPulling="2026-03-13 12:50:52.835635751 +0000 UTC m=+79.402308498" lastFinishedPulling="2026-03-13 12:50:54.854726514 +0000 UTC m=+81.421399271" observedRunningTime="2026-03-13 12:50:55.47321101 +0000 UTC m=+82.039883787" watchObservedRunningTime="2026-03-13 12:50:55.473482428 +0000 UTC m=+82.040155185"
Mar 13 12:50:55.718497 master-0 kubenswrapper[19715]: I0313 12:50:55.718344 19715 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 12:50:55.718497 master-0 kubenswrapper[19715]: I0313 12:50:55.718437 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 12:50:55.718868 master-0 kubenswrapper[19715]: I0313 12:50:55.718507 19715 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw"
Mar 13 12:50:55.719264 master-0 kubenswrapper[19715]: I0313 12:50:55.719221 19715 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"059ba8cdf96cbfaa0c84868f9e73236a2a31a080a6c5d262ecec57fd9b950d4b"} pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Mar 13 12:50:55.719335 master-0 kubenswrapper[19715]: I0313 12:50:55.719292 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" containerID="cri-o://059ba8cdf96cbfaa0c84868f9e73236a2a31a080a6c5d262ecec57fd9b950d4b" gracePeriod=600
Mar 13 12:50:56.474230 master-0 kubenswrapper[19715]: I0313 12:50:56.474148 19715 generic.go:334] "Generic (PLEG): container finished" podID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerID="059ba8cdf96cbfaa0c84868f9e73236a2a31a080a6c5d262ecec57fd9b950d4b" exitCode=0
Mar 13 12:50:56.474917 master-0 kubenswrapper[19715]: I0313 12:50:56.474250 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerDied","Data":"059ba8cdf96cbfaa0c84868f9e73236a2a31a080a6c5d262ecec57fd9b950d4b"}
Mar 13 12:50:56.474917 master-0 kubenswrapper[19715]: I0313 12:50:56.474319 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerStarted","Data":"5b99add1353856acea33dcb530c729d1f04a71fe3603e00ce50bcb93fec430ed"}
Mar 13 12:50:56.474917 master-0 kubenswrapper[19715]: I0313 12:50:56.474345 19715 scope.go:117] "RemoveContainer" containerID="eab5e29eedcb24ff8a4205f7bf62bee3cde077c035b42cc119aefb133323f99c"
Mar 13 12:50:57.892938 master-0 kubenswrapper[19715]: E0313 12:50:57.892757 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 12:50:57.897381 master-0 kubenswrapper[19715]: E0313 12:50:57.894515 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 12:50:57.898184 master-0 kubenswrapper[19715]: E0313 12:50:57.898116 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 12:50:57.898248 master-0 kubenswrapper[19715]: E0313 12:50:57.898199 19715 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" podUID="cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" containerName="kube-multus-additional-cni-plugins"
Mar 13 12:50:59.522613 master-0 kubenswrapper[19715]: I0313 12:50:59.522501 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"
Mar 13 12:50:59.524257 master-0 kubenswrapper[19715]: E0313 12:50:59.522936 19715 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 13 12:50:59.524257 master-0 kubenswrapper[19715]: E0313 12:50:59.523151 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls podName:e79537b5-fbdf-419a-9148-da0433806c88 nodeName:}" failed. No retries permitted until 2026-03-13 12:51:07.523085479 +0000 UTC m=+94.089758236 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-wb9b4" (UID: "e79537b5-fbdf-419a-9148-da0433806c88") : secret "prometheus-operator-tls" not found
Mar 13 12:51:01.571606 master-0 kubenswrapper[19715]: I0313 12:51:01.571539 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_525e96d8-c2f8-43d4-9ab0-9077604d3b13/installer/0.log"
Mar 13 12:51:01.572286 master-0 kubenswrapper[19715]: I0313 12:51:01.571642 19715 generic.go:334] "Generic (PLEG): container finished" podID="525e96d8-c2f8-43d4-9ab0-9077604d3b13" containerID="3ea05128587e284d92ee0b905fc3c850bb6638d76cc0a940ca8662cb8a1e2d30" exitCode=1
Mar 13 12:51:01.572286 master-0 kubenswrapper[19715]: I0313 12:51:01.571758 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"525e96d8-c2f8-43d4-9ab0-9077604d3b13","Type":"ContainerDied","Data":"3ea05128587e284d92ee0b905fc3c850bb6638d76cc0a940ca8662cb8a1e2d30"}
Mar 13 12:51:01.574776 master-0 kubenswrapper[19715]: I0313 12:51:01.574739 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-pbgd4_4f942fce-07a9-4377-8330-c6249a5a8b24/multus-admission-controller/0.log"
Mar 13 12:51:01.574896 master-0 kubenswrapper[19715]: I0313 12:51:01.574791 19715 generic.go:334] "Generic (PLEG): container finished" podID="4f942fce-07a9-4377-8330-c6249a5a8b24" containerID="a77c66d0bbef5ac4ba841e64d029a75b81101530693d755adf73cb234d47aa31" exitCode=137
Mar 13 12:51:01.574896 master-0 kubenswrapper[19715]: I0313 12:51:01.574839 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" event={"ID":"4f942fce-07a9-4377-8330-c6249a5a8b24","Type":"ContainerDied","Data":"a77c66d0bbef5ac4ba841e64d029a75b81101530693d755adf73cb234d47aa31"}
Mar 13 12:51:02.114468 master-0 kubenswrapper[19715]: I0313 12:51:02.114405 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-pbgd4_4f942fce-07a9-4377-8330-c6249a5a8b24/multus-admission-controller/0.log"
Mar 13 12:51:02.114803 master-0 kubenswrapper[19715]: I0313 12:51:02.114556 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4"
Mar 13 12:51:02.279856 master-0 kubenswrapper[19715]: I0313 12:51:02.279788 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_525e96d8-c2f8-43d4-9ab0-9077604d3b13/installer/0.log"
Mar 13 12:51:02.280146 master-0 kubenswrapper[19715]: I0313 12:51:02.279919 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:51:02.291981 master-0 kubenswrapper[19715]: I0313 12:51:02.291915 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") pod \"4f942fce-07a9-4377-8330-c6249a5a8b24\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " Mar 13 12:51:02.292139 master-0 kubenswrapper[19715]: I0313 12:51:02.291997 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-var-lock\") pod \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " Mar 13 12:51:02.292139 master-0 kubenswrapper[19715]: I0313 12:51:02.292036 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7s2cb\" (UniqueName: \"kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb\") pod \"4f942fce-07a9-4377-8330-c6249a5a8b24\" (UID: \"4f942fce-07a9-4377-8330-c6249a5a8b24\") " Mar 13 12:51:02.292257 master-0 kubenswrapper[19715]: I0313 12:51:02.292216 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-var-lock" (OuterVolumeSpecName: "var-lock") pod "525e96d8-c2f8-43d4-9ab0-9077604d3b13" (UID: "525e96d8-c2f8-43d4-9ab0-9077604d3b13"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:51:02.292317 master-0 kubenswrapper[19715]: I0313 12:51:02.292280 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kube-api-access\") pod \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " Mar 13 12:51:02.292588 master-0 kubenswrapper[19715]: I0313 12:51:02.292543 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:02.301361 master-0 kubenswrapper[19715]: I0313 12:51:02.301284 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "4f942fce-07a9-4377-8330-c6249a5a8b24" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:51:02.301736 master-0 kubenswrapper[19715]: I0313 12:51:02.301655 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "525e96d8-c2f8-43d4-9ab0-9077604d3b13" (UID: "525e96d8-c2f8-43d4-9ab0-9077604d3b13"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:51:02.303316 master-0 kubenswrapper[19715]: I0313 12:51:02.303286 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb" (OuterVolumeSpecName: "kube-api-access-7s2cb") pod "4f942fce-07a9-4377-8330-c6249a5a8b24" (UID: "4f942fce-07a9-4377-8330-c6249a5a8b24"). 
InnerVolumeSpecName "kube-api-access-7s2cb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:51:02.393218 master-0 kubenswrapper[19715]: I0313 12:51:02.393147 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kubelet-dir\") pod \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\" (UID: \"525e96d8-c2f8-43d4-9ab0-9077604d3b13\") " Mar 13 12:51:02.393492 master-0 kubenswrapper[19715]: I0313 12:51:02.393424 19715 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/4f942fce-07a9-4377-8330-c6249a5a8b24-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:02.393492 master-0 kubenswrapper[19715]: I0313 12:51:02.393439 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7s2cb\" (UniqueName: \"kubernetes.io/projected/4f942fce-07a9-4377-8330-c6249a5a8b24-kube-api-access-7s2cb\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:02.393492 master-0 kubenswrapper[19715]: I0313 12:51:02.393465 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:02.393703 master-0 kubenswrapper[19715]: I0313 12:51:02.393520 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "525e96d8-c2f8-43d4-9ab0-9077604d3b13" (UID: "525e96d8-c2f8-43d4-9ab0-9077604d3b13"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:51:02.429433 master-0 kubenswrapper[19715]: I0313 12:51:02.429255 19715 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 12:51:02.494148 master-0 kubenswrapper[19715]: I0313 12:51:02.494036 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/525e96d8-c2f8-43d4-9ab0-9077604d3b13-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:02.584676 master-0 kubenswrapper[19715]: I0313 12:51:02.584632 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-pbgd4_4f942fce-07a9-4377-8330-c6249a5a8b24/multus-admission-controller/0.log" Mar 13 12:51:02.585354 master-0 kubenswrapper[19715]: I0313 12:51:02.584748 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" event={"ID":"4f942fce-07a9-4377-8330-c6249a5a8b24","Type":"ContainerDied","Data":"310bb063b58a9159851ef88dd90cde60bf53039832d7c07feba8d470bdfa8768"} Mar 13 12:51:02.585354 master-0 kubenswrapper[19715]: I0313 12:51:02.584786 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-pbgd4" Mar 13 12:51:02.585354 master-0 kubenswrapper[19715]: I0313 12:51:02.585228 19715 scope.go:117] "RemoveContainer" containerID="5c411b542b6c604fb634e20ec1667bd444b32f47270e3ec6baff792160a18f75" Mar 13 12:51:02.590969 master-0 kubenswrapper[19715]: I0313 12:51:02.590927 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_525e96d8-c2f8-43d4-9ab0-9077604d3b13/installer/0.log" Mar 13 12:51:02.591101 master-0 kubenswrapper[19715]: I0313 12:51:02.591005 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"525e96d8-c2f8-43d4-9ab0-9077604d3b13","Type":"ContainerDied","Data":"8c1ea5dbebfa541e8e11f166e99cd7d351057304a571bdb2effbb5df6901ccb1"} Mar 13 12:51:02.591177 master-0 kubenswrapper[19715]: I0313 12:51:02.591098 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 12:51:02.617172 master-0 kubenswrapper[19715]: I0313 12:51:02.616535 19715 scope.go:117] "RemoveContainer" containerID="a77c66d0bbef5ac4ba841e64d029a75b81101530693d755adf73cb234d47aa31" Mar 13 12:51:02.631931 master-0 kubenswrapper[19715]: I0313 12:51:02.631841 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-pbgd4"] Mar 13 12:51:02.642691 master-0 kubenswrapper[19715]: I0313 12:51:02.639074 19715 scope.go:117] "RemoveContainer" containerID="3ea05128587e284d92ee0b905fc3c850bb6638d76cc0a940ca8662cb8a1e2d30" Mar 13 12:51:02.649701 master-0 kubenswrapper[19715]: I0313 12:51:02.649627 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-pbgd4"] Mar 13 12:51:02.661889 master-0 kubenswrapper[19715]: I0313 12:51:02.661808 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 12:51:02.670564 master-0 kubenswrapper[19715]: I0313 12:51:02.670512 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 12:51:03.202436 master-0 kubenswrapper[19715]: I0313 12:51:03.202378 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:51:03.202738 master-0 kubenswrapper[19715]: E0313 12:51:03.202568 19715 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found Mar 13 12:51:03.202738 master-0 kubenswrapper[19715]: E0313 12:51:03.202720 19715 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert podName:cbd86a78-769d-4abc-b02d-48d52d9937c4 nodeName:}" failed. No retries permitted until 2026-03-13 12:51:19.202688002 +0000 UTC m=+105.769360769 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert") pod "ingress-canary-ddjx7" (UID: "cbd86a78-769d-4abc-b02d-48d52d9937c4") : secret "canary-serving-cert" not found Mar 13 12:51:03.704494 master-0 kubenswrapper[19715]: I0313 12:51:03.704445 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f942fce-07a9-4377-8330-c6249a5a8b24" path="/var/lib/kubelet/pods/4f942fce-07a9-4377-8330-c6249a5a8b24/volumes" Mar 13 12:51:03.705248 master-0 kubenswrapper[19715]: I0313 12:51:03.705205 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="525e96d8-c2f8-43d4-9ab0-9077604d3b13" path="/var/lib/kubelet/pods/525e96d8-c2f8-43d4-9ab0-9077604d3b13/volumes" Mar 13 12:51:04.874531 master-0 kubenswrapper[19715]: I0313 12:51:04.874455 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-p4jfp"] Mar 13 12:51:04.875198 master-0 kubenswrapper[19715]: E0313 12:51:04.874900 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f942fce-07a9-4377-8330-c6249a5a8b24" containerName="kube-rbac-proxy" Mar 13 12:51:04.875198 master-0 kubenswrapper[19715]: I0313 12:51:04.874946 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f942fce-07a9-4377-8330-c6249a5a8b24" containerName="kube-rbac-proxy" Mar 13 12:51:04.875198 master-0 kubenswrapper[19715]: E0313 12:51:04.874977 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f942fce-07a9-4377-8330-c6249a5a8b24" containerName="multus-admission-controller" Mar 13 12:51:04.875198 master-0 kubenswrapper[19715]: I0313 12:51:04.874987 19715 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="4f942fce-07a9-4377-8330-c6249a5a8b24" containerName="multus-admission-controller" Mar 13 12:51:04.875198 master-0 kubenswrapper[19715]: E0313 12:51:04.875003 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="525e96d8-c2f8-43d4-9ab0-9077604d3b13" containerName="installer" Mar 13 12:51:04.875198 master-0 kubenswrapper[19715]: I0313 12:51:04.875012 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="525e96d8-c2f8-43d4-9ab0-9077604d3b13" containerName="installer" Mar 13 12:51:04.875622 master-0 kubenswrapper[19715]: I0313 12:51:04.875203 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="525e96d8-c2f8-43d4-9ab0-9077604d3b13" containerName="installer" Mar 13 12:51:04.875622 master-0 kubenswrapper[19715]: I0313 12:51:04.875250 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f942fce-07a9-4377-8330-c6249a5a8b24" containerName="kube-rbac-proxy" Mar 13 12:51:04.875622 master-0 kubenswrapper[19715]: I0313 12:51:04.875264 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f942fce-07a9-4377-8330-c6249a5a8b24" containerName="multus-admission-controller" Mar 13 12:51:04.876153 master-0 kubenswrapper[19715]: I0313 12:51:04.876096 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:04.878617 master-0 kubenswrapper[19715]: I0313 12:51:04.878544 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-89sxl" Mar 13 12:51:04.878777 master-0 kubenswrapper[19715]: I0313 12:51:04.878731 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 12:51:04.879262 master-0 kubenswrapper[19715]: I0313 12:51:04.879227 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 12:51:04.891193 master-0 kubenswrapper[19715]: I0313 12:51:04.891138 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/84286047-5d0d-4313-b85b-0810b9b89080-node-bootstrap-token\") pod \"machine-config-server-p4jfp\" (UID: \"84286047-5d0d-4313-b85b-0810b9b89080\") " pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:04.892013 master-0 kubenswrapper[19715]: I0313 12:51:04.891963 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lsl8g\" (UniqueName: \"kubernetes.io/projected/84286047-5d0d-4313-b85b-0810b9b89080-kube-api-access-lsl8g\") pod \"machine-config-server-p4jfp\" (UID: \"84286047-5d0d-4313-b85b-0810b9b89080\") " pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:04.892348 master-0 kubenswrapper[19715]: I0313 12:51:04.892320 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/84286047-5d0d-4313-b85b-0810b9b89080-certs\") pod \"machine-config-server-p4jfp\" (UID: \"84286047-5d0d-4313-b85b-0810b9b89080\") " 
pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:04.994136 master-0 kubenswrapper[19715]: I0313 12:51:04.994058 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lsl8g\" (UniqueName: \"kubernetes.io/projected/84286047-5d0d-4313-b85b-0810b9b89080-kube-api-access-lsl8g\") pod \"machine-config-server-p4jfp\" (UID: \"84286047-5d0d-4313-b85b-0810b9b89080\") " pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:04.994136 master-0 kubenswrapper[19715]: I0313 12:51:04.994159 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/84286047-5d0d-4313-b85b-0810b9b89080-certs\") pod \"machine-config-server-p4jfp\" (UID: \"84286047-5d0d-4313-b85b-0810b9b89080\") " pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:04.994510 master-0 kubenswrapper[19715]: I0313 12:51:04.994215 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/84286047-5d0d-4313-b85b-0810b9b89080-node-bootstrap-token\") pod \"machine-config-server-p4jfp\" (UID: \"84286047-5d0d-4313-b85b-0810b9b89080\") " pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:04.998962 master-0 kubenswrapper[19715]: I0313 12:51:04.998716 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/84286047-5d0d-4313-b85b-0810b9b89080-node-bootstrap-token\") pod \"machine-config-server-p4jfp\" (UID: \"84286047-5d0d-4313-b85b-0810b9b89080\") " pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:05.154924 master-0 kubenswrapper[19715]: I0313 12:51:05.145633 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/secret/84286047-5d0d-4313-b85b-0810b9b89080-certs\") pod \"machine-config-server-p4jfp\" (UID: \"84286047-5d0d-4313-b85b-0810b9b89080\") " pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:05.165171 master-0 kubenswrapper[19715]: I0313 12:51:05.165118 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lsl8g\" (UniqueName: \"kubernetes.io/projected/84286047-5d0d-4313-b85b-0810b9b89080-kube-api-access-lsl8g\") pod \"machine-config-server-p4jfp\" (UID: \"84286047-5d0d-4313-b85b-0810b9b89080\") " pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:05.197078 master-0 kubenswrapper[19715]: I0313 12:51:05.197017 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-p4jfp" Mar 13 12:51:05.614681 master-0 kubenswrapper[19715]: I0313 12:51:05.614625 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-p4jfp" event={"ID":"84286047-5d0d-4313-b85b-0810b9b89080","Type":"ContainerStarted","Data":"0439ee7273973c545e63f7e747189f97dc392f232f5d5098fb570c44bee860f5"} Mar 13 12:51:05.614681 master-0 kubenswrapper[19715]: I0313 12:51:05.614680 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-p4jfp" event={"ID":"84286047-5d0d-4313-b85b-0810b9b89080","Type":"ContainerStarted","Data":"4703f9cb58f74ff03018c478954f2214dd17deecc816615f1ce9fb6087ea84b7"} Mar 13 12:51:05.649347 master-0 kubenswrapper[19715]: I0313 12:51:05.646541 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-p4jfp" podStartSLOduration=1.6465089480000001 podStartE2EDuration="1.646508948s" podCreationTimestamp="2026-03-13 12:51:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:51:05.645724723 +0000 UTC m=+92.212397490" watchObservedRunningTime="2026-03-13 12:51:05.646508948 +0000 UTC m=+92.213181705" Mar 13 12:51:05.698669 master-0 kubenswrapper[19715]: I0313 12:51:05.697472 19715 scope.go:117] "RemoveContainer" containerID="adcc2ae99c74d89c14a029438416cc6d981420d2d8f4442940afaa840b24846c" Mar 13 12:51:06.626297 master-0 kubenswrapper[19715]: I0313 12:51:06.626218 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-qhg45_572e278b-c463-49b0-a198-49bd9e2c288c/console-operator/2.log" Mar 13 12:51:06.626967 master-0 kubenswrapper[19715]: I0313 12:51:06.626802 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-qhg45_572e278b-c463-49b0-a198-49bd9e2c288c/console-operator/1.log" Mar 13 12:51:06.626967 master-0 kubenswrapper[19715]: I0313 12:51:06.626870 19715 generic.go:334] "Generic (PLEG): container finished" podID="572e278b-c463-49b0-a198-49bd9e2c288c" containerID="e7eef51e1851d4064dd3414fbc07997689fa5175ccbd02a52aec36eb5b2d0dd9" exitCode=255 Mar 13 12:51:06.626967 master-0 kubenswrapper[19715]: I0313 12:51:06.626910 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" event={"ID":"572e278b-c463-49b0-a198-49bd9e2c288c","Type":"ContainerDied","Data":"e7eef51e1851d4064dd3414fbc07997689fa5175ccbd02a52aec36eb5b2d0dd9"} Mar 13 12:51:06.627079 master-0 kubenswrapper[19715]: I0313 12:51:06.626986 19715 scope.go:117] "RemoveContainer" containerID="adcc2ae99c74d89c14a029438416cc6d981420d2d8f4442940afaa840b24846c" Mar 13 12:51:06.627585 master-0 kubenswrapper[19715]: I0313 12:51:06.627551 19715 scope.go:117] "RemoveContainer" containerID="e7eef51e1851d4064dd3414fbc07997689fa5175ccbd02a52aec36eb5b2d0dd9" Mar 13 12:51:06.627846 master-0 kubenswrapper[19715]: E0313 
12:51:06.627819 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-6c7fb6b958-qhg45_openshift-console-operator(572e278b-c463-49b0-a198-49bd9e2c288c)\"" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" podUID="572e278b-c463-49b0-a198-49bd9e2c288c" Mar 13 12:51:07.546456 master-0 kubenswrapper[19715]: I0313 12:51:07.546378 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4" Mar 13 12:51:07.551643 master-0 kubenswrapper[19715]: I0313 12:51:07.550292 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/e79537b5-fbdf-419a-9148-da0433806c88-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-wb9b4\" (UID: \"e79537b5-fbdf-419a-9148-da0433806c88\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4" Mar 13 12:51:07.598390 master-0 kubenswrapper[19715]: I0313 12:51:07.597698 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4" Mar 13 12:51:07.636522 master-0 kubenswrapper[19715]: I0313 12:51:07.636455 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-qhg45_572e278b-c463-49b0-a198-49bd9e2c288c/console-operator/2.log" Mar 13 12:51:07.963530 master-0 kubenswrapper[19715]: E0313 12:51:07.962876 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:51:07.976632 master-0 kubenswrapper[19715]: E0313 12:51:07.976380 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:51:07.994613 master-0 kubenswrapper[19715]: E0313 12:51:07.991597 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:51:07.994613 master-0 kubenswrapper[19715]: E0313 12:51:07.991720 19715 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" podUID="cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" containerName="kube-multus-additional-cni-plugins" Mar 13 12:51:08.250650 
master-0 kubenswrapper[19715]: I0313 12:51:08.248762 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4"] Mar 13 12:51:08.254109 master-0 kubenswrapper[19715]: W0313 12:51:08.253829 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode79537b5_fbdf_419a_9148_da0433806c88.slice/crio-cc7d69a2d682a6814fcedd434e864a457b9d9008e19553214f3f04007e42b9c7 WatchSource:0}: Error finding container cc7d69a2d682a6814fcedd434e864a457b9d9008e19553214f3f04007e42b9c7: Status 404 returned error can't find the container with id cc7d69a2d682a6814fcedd434e864a457b9d9008e19553214f3f04007e42b9c7 Mar 13 12:51:08.644743 master-0 kubenswrapper[19715]: I0313 12:51:08.644647 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4" event={"ID":"e79537b5-fbdf-419a-9148-da0433806c88","Type":"ContainerStarted","Data":"cc7d69a2d682a6814fcedd434e864a457b9d9008e19553214f3f04007e42b9c7"} Mar 13 12:51:10.708908 master-0 kubenswrapper[19715]: I0313 12:51:10.708812 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4" event={"ID":"e79537b5-fbdf-419a-9148-da0433806c88","Type":"ContainerStarted","Data":"dc5c4de967ca9daf92bf4d503dd127b15b4a357bb40aeb6507b0b1bae1296f3e"} Mar 13 12:51:10.708908 master-0 kubenswrapper[19715]: I0313 12:51:10.708886 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4" event={"ID":"e79537b5-fbdf-419a-9148-da0433806c88","Type":"ContainerStarted","Data":"ae39d6c210cc1cf0ddeaaaedafb6af8cc2ecf81958eb8189270390f8f0ff78b2"} Mar 13 12:51:10.743233 master-0 kubenswrapper[19715]: I0313 12:51:10.743130 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5ff8674d55-wb9b4" 
podStartSLOduration=17.684964101 podStartE2EDuration="19.743078868s" podCreationTimestamp="2026-03-13 12:50:51 +0000 UTC" firstStartedPulling="2026-03-13 12:51:08.256324846 +0000 UTC m=+94.822997603" lastFinishedPulling="2026-03-13 12:51:10.314439613 +0000 UTC m=+96.881112370" observedRunningTime="2026-03-13 12:51:10.739260479 +0000 UTC m=+97.305933266" watchObservedRunningTime="2026-03-13 12:51:10.743078868 +0000 UTC m=+97.309751635" Mar 13 12:51:10.808797 master-0 kubenswrapper[19715]: I0313 12:51:10.808699 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-svrbk"] Mar 13 12:51:10.809651 master-0 kubenswrapper[19715]: I0313 12:51:10.809615 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" Mar 13 12:51:10.813161 master-0 kubenswrapper[19715]: I0313 12:51:10.812662 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 12:51:10.813161 master-0 kubenswrapper[19715]: I0313 12:51:10.812962 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-8qwx8" Mar 13 12:51:10.825907 master-0 kubenswrapper[19715]: I0313 12:51:10.825816 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 12:51:10.834912 master-0 kubenswrapper[19715]: I0313 12:51:10.834776 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-svrbk"] Mar 13 12:51:10.903658 master-0 kubenswrapper[19715]: I0313 12:51:10.903568 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5e94a785-5cca-4645-b97d-7c4caf0c6c42-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-svrbk\" (UID: \"5e94a785-5cca-4645-b97d-7c4caf0c6c42\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" Mar 13 12:51:10.903658 master-0 kubenswrapper[19715]: I0313 12:51:10.903662 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5e94a785-5cca-4645-b97d-7c4caf0c6c42-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-svrbk\" (UID: \"5e94a785-5cca-4645-b97d-7c4caf0c6c42\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" Mar 13 12:51:11.007844 master-0 kubenswrapper[19715]: I0313 12:51:11.004821 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5e94a785-5cca-4645-b97d-7c4caf0c6c42-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-svrbk\" (UID: \"5e94a785-5cca-4645-b97d-7c4caf0c6c42\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" Mar 13 12:51:11.007844 master-0 kubenswrapper[19715]: I0313 12:51:11.004897 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5e94a785-5cca-4645-b97d-7c4caf0c6c42-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-svrbk\" (UID: \"5e94a785-5cca-4645-b97d-7c4caf0c6c42\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" Mar 13 12:51:11.007844 master-0 kubenswrapper[19715]: E0313 12:51:11.005036 19715 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: secret "networking-console-plugin-cert" not found Mar 13 12:51:11.007844 master-0 kubenswrapper[19715]: E0313 12:51:11.005159 19715 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/5e94a785-5cca-4645-b97d-7c4caf0c6c42-networking-console-plugin-cert podName:5e94a785-5cca-4645-b97d-7c4caf0c6c42 nodeName:}" failed. No retries permitted until 2026-03-13 12:51:11.505116301 +0000 UTC m=+98.071789058 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5e94a785-5cca-4645-b97d-7c4caf0c6c42-networking-console-plugin-cert") pod "networking-console-plugin-5cbd49d755-svrbk" (UID: "5e94a785-5cca-4645-b97d-7c4caf0c6c42") : secret "networking-console-plugin-cert" not found Mar 13 12:51:11.007844 master-0 kubenswrapper[19715]: I0313 12:51:11.006013 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5e94a785-5cca-4645-b97d-7c4caf0c6c42-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-svrbk\" (UID: \"5e94a785-5cca-4645-b97d-7c4caf0c6c42\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" Mar 13 12:51:11.511273 master-0 kubenswrapper[19715]: I0313 12:51:11.511204 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5e94a785-5cca-4645-b97d-7c4caf0c6c42-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-svrbk\" (UID: \"5e94a785-5cca-4645-b97d-7c4caf0c6c42\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" Mar 13 12:51:11.514919 master-0 kubenswrapper[19715]: I0313 12:51:11.514857 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5e94a785-5cca-4645-b97d-7c4caf0c6c42-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-svrbk\" (UID: \"5e94a785-5cca-4645-b97d-7c4caf0c6c42\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" Mar 13 
12:51:11.730169 master-0 kubenswrapper[19715]: I0313 12:51:11.728912 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" Mar 13 12:51:12.172719 master-0 kubenswrapper[19715]: I0313 12:51:12.172643 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-svrbk"] Mar 13 12:51:12.178501 master-0 kubenswrapper[19715]: W0313 12:51:12.178421 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e94a785_5cca_4645_b97d_7c4caf0c6c42.slice/crio-8299df2deced8337aab3b38958142d66e6916e2611b099dc4fcec891ed61ec93 WatchSource:0}: Error finding container 8299df2deced8337aab3b38958142d66e6916e2611b099dc4fcec891ed61ec93: Status 404 returned error can't find the container with id 8299df2deced8337aab3b38958142d66e6916e2611b099dc4fcec891ed61ec93 Mar 13 12:51:12.737618 master-0 kubenswrapper[19715]: I0313 12:51:12.736149 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" event={"ID":"5e94a785-5cca-4645-b97d-7c4caf0c6c42","Type":"ContainerStarted","Data":"8299df2deced8337aab3b38958142d66e6916e2611b099dc4fcec891ed61ec93"} Mar 13 12:51:13.047706 master-0 kubenswrapper[19715]: I0313 12:51:13.044250 19715 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:51:13.047706 master-0 kubenswrapper[19715]: I0313 12:51:13.045003 19715 scope.go:117] "RemoveContainer" containerID="e7eef51e1851d4064dd3414fbc07997689fa5175ccbd02a52aec36eb5b2d0dd9" Mar 13 12:51:13.047706 master-0 kubenswrapper[19715]: I0313 12:51:13.045117 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" Mar 13 12:51:13.047706 master-0 
kubenswrapper[19715]: E0313 12:51:13.045288 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-6c7fb6b958-qhg45_openshift-console-operator(572e278b-c463-49b0-a198-49bd9e2c288c)\"" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" podUID="572e278b-c463-49b0-a198-49bd9e2c288c" Mar 13 12:51:13.303243 master-0 kubenswrapper[19715]: I0313 12:51:13.303125 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-djlbx"] Mar 13 12:51:13.311053 master-0 kubenswrapper[19715]: I0313 12:51:13.310987 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.333156 master-0 kubenswrapper[19715]: I0313 12:51:13.332848 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2"] Mar 13 12:51:13.334206 master-0 kubenswrapper[19715]: I0313 12:51:13.334147 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.335468 master-0 kubenswrapper[19715]: I0313 12:51:13.335326 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-qmg42" Mar 13 12:51:13.335701 master-0 kubenswrapper[19715]: I0313 12:51:13.335487 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 13 12:51:13.341102 master-0 kubenswrapper[19715]: I0313 12:51:13.339090 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 12:51:13.341102 master-0 kubenswrapper[19715]: I0313 12:51:13.339372 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 13 12:51:13.341102 master-0 kubenswrapper[19715]: I0313 12:51:13.339585 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-mwrx7" Mar 13 12:51:13.341102 master-0 kubenswrapper[19715]: I0313 12:51:13.339817 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373299 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-wtmp\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373390 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5lmb\" (UniqueName: 
\"kubernetes.io/projected/74f20dbd-f800-4aab-8263-1bc2395c8123-kube-api-access-l5lmb\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373434 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-textfile\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373470 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-tls\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373500 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6db3e185-395c-4d94-82a0-fb14978f626d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373546 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74f20dbd-f800-4aab-8263-1bc2395c8123-sys\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 
12:51:13.373581 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6db3e185-395c-4d94-82a0-fb14978f626d-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373651 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbh8q\" (UniqueName: \"kubernetes.io/projected/6db3e185-395c-4d94-82a0-fb14978f626d-kube-api-access-wbh8q\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373752 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6db3e185-395c-4d94-82a0-fb14978f626d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373781 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/74f20dbd-f800-4aab-8263-1bc2395c8123-root\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373808 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.374319 master-0 kubenswrapper[19715]: I0313 12:51:13.373851 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74f20dbd-f800-4aab-8263-1bc2395c8123-metrics-client-ca\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.410272 master-0 kubenswrapper[19715]: I0313 12:51:13.408599 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2"] Mar 13 12:51:13.464644 master-0 kubenswrapper[19715]: I0313 12:51:13.463741 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"] Mar 13 12:51:13.486640 master-0 kubenswrapper[19715]: I0313 12:51:13.484399 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.498674 master-0 kubenswrapper[19715]: I0313 12:51:13.490453 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6db3e185-395c-4d94-82a0-fb14978f626d-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.498674 master-0 kubenswrapper[19715]: I0313 12:51:13.490541 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbh8q\" (UniqueName: \"kubernetes.io/projected/6db3e185-395c-4d94-82a0-fb14978f626d-kube-api-access-wbh8q\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.498674 master-0 kubenswrapper[19715]: I0313 12:51:13.490673 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6db3e185-395c-4d94-82a0-fb14978f626d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.498674 master-0 kubenswrapper[19715]: I0313 12:51:13.494393 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6db3e185-395c-4d94-82a0-fb14978f626d-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.517261 
19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/74f20dbd-f800-4aab-8263-1bc2395c8123-root\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.517335 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.517409 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74f20dbd-f800-4aab-8263-1bc2395c8123-metrics-client-ca\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.517437 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-wtmp\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.517476 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5lmb\" (UniqueName: \"kubernetes.io/projected/74f20dbd-f800-4aab-8263-1bc2395c8123-kube-api-access-l5lmb\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 
kubenswrapper[19715]: I0313 12:51:13.517506 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-textfile\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.517531 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-tls\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.517572 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6db3e185-395c-4d94-82a0-fb14978f626d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.517635 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74f20dbd-f800-4aab-8263-1bc2395c8123-sys\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.517773 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/74f20dbd-f800-4aab-8263-1bc2395c8123-sys\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 
12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.517815 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/74f20dbd-f800-4aab-8263-1bc2395c8123-root\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.521511 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.522254 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/74f20dbd-f800-4aab-8263-1bc2395c8123-metrics-client-ca\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.522449 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-wtmp\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.523640 master-0 kubenswrapper[19715]: I0313 12:51:13.523017 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-textfile\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 
12:51:13.527695 master-0 kubenswrapper[19715]: I0313 12:51:13.527646 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6db3e185-395c-4d94-82a0-fb14978f626d-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.531663 master-0 kubenswrapper[19715]: I0313 12:51:13.528036 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 13 12:51:13.531663 master-0 kubenswrapper[19715]: E0313 12:51:13.528383 19715 secret.go:189] Couldn't get secret openshift-monitoring/openshift-state-metrics-tls: secret "openshift-state-metrics-tls" not found Mar 13 12:51:13.531663 master-0 kubenswrapper[19715]: E0313 12:51:13.528436 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6db3e185-395c-4d94-82a0-fb14978f626d-openshift-state-metrics-tls podName:6db3e185-395c-4d94-82a0-fb14978f626d nodeName:}" failed. No retries permitted until 2026-03-13 12:51:14.028415197 +0000 UTC m=+100.595087964 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openshift-state-metrics-tls" (UniqueName: "kubernetes.io/secret/6db3e185-395c-4d94-82a0-fb14978f626d-openshift-state-metrics-tls") pod "openshift-state-metrics-74cc79fd76-vwks2" (UID: "6db3e185-395c-4d94-82a0-fb14978f626d") : secret "openshift-state-metrics-tls" not found Mar 13 12:51:13.531663 master-0 kubenswrapper[19715]: I0313 12:51:13.528694 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 13 12:51:13.531663 master-0 kubenswrapper[19715]: I0313 12:51:13.528871 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-7t467" Mar 13 12:51:13.531663 master-0 kubenswrapper[19715]: I0313 12:51:13.529575 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 13 12:51:13.532051 master-0 kubenswrapper[19715]: I0313 12:51:13.531730 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"] Mar 13 12:51:13.540798 master-0 kubenswrapper[19715]: I0313 12:51:13.540593 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/74f20dbd-f800-4aab-8263-1bc2395c8123-node-exporter-tls\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.606851 master-0 kubenswrapper[19715]: I0313 12:51:13.602269 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5lmb\" (UniqueName: \"kubernetes.io/projected/74f20dbd-f800-4aab-8263-1bc2395c8123-kube-api-access-l5lmb\") pod \"node-exporter-djlbx\" (UID: \"74f20dbd-f800-4aab-8263-1bc2395c8123\") " pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.606851 master-0 kubenswrapper[19715]: I0313 
12:51:13.602886 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbh8q\" (UniqueName: \"kubernetes.io/projected/6db3e185-395c-4d94-82a0-fb14978f626d-kube-api-access-wbh8q\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" Mar 13 12:51:13.625656 master-0 kubenswrapper[19715]: I0313 12:51:13.620448 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7495ca2-ee01-46f5-b210-5957f546270b-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.625656 master-0 kubenswrapper[19715]: I0313 12:51:13.620497 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f7495ca2-ee01-46f5-b210-5957f546270b-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.625656 master-0 kubenswrapper[19715]: I0313 12:51:13.620625 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f7495ca2-ee01-46f5-b210-5957f546270b-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.625656 master-0 kubenswrapper[19715]: I0313 12:51:13.620654 19715 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7495ca2-ee01-46f5-b210-5957f546270b-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.625656 master-0 kubenswrapper[19715]: I0313 12:51:13.620678 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brlvw\" (UniqueName: \"kubernetes.io/projected/f7495ca2-ee01-46f5-b210-5957f546270b-kube-api-access-brlvw\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.625656 master-0 kubenswrapper[19715]: I0313 12:51:13.620704 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f7495ca2-ee01-46f5-b210-5957f546270b-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.662706 master-0 kubenswrapper[19715]: I0313 12:51:13.653032 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-djlbx" Mar 13 12:51:13.724722 master-0 kubenswrapper[19715]: I0313 12:51:13.722296 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7495ca2-ee01-46f5-b210-5957f546270b-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.724722 master-0 kubenswrapper[19715]: I0313 12:51:13.722360 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f7495ca2-ee01-46f5-b210-5957f546270b-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.724722 master-0 kubenswrapper[19715]: I0313 12:51:13.723228 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f7495ca2-ee01-46f5-b210-5957f546270b-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.724722 master-0 kubenswrapper[19715]: I0313 12:51:13.723278 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7495ca2-ee01-46f5-b210-5957f546270b-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" Mar 13 12:51:13.724722 master-0 kubenswrapper[19715]: I0313 
12:51:13.723330 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brlvw\" (UniqueName: \"kubernetes.io/projected/f7495ca2-ee01-46f5-b210-5957f546270b-kube-api-access-brlvw\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"
Mar 13 12:51:13.724722 master-0 kubenswrapper[19715]: I0313 12:51:13.723359 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f7495ca2-ee01-46f5-b210-5957f546270b-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"
Mar 13 12:51:13.724722 master-0 kubenswrapper[19715]: I0313 12:51:13.724059 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/f7495ca2-ee01-46f5-b210-5957f546270b-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"
Mar 13 12:51:13.726989 master-0 kubenswrapper[19715]: I0313 12:51:13.726925 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/f7495ca2-ee01-46f5-b210-5957f546270b-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"
Mar 13 12:51:13.729676 master-0 kubenswrapper[19715]: I0313 12:51:13.728108 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/f7495ca2-ee01-46f5-b210-5957f546270b-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"
Mar 13 12:51:13.735945 master-0 kubenswrapper[19715]: I0313 12:51:13.734710 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/f7495ca2-ee01-46f5-b210-5957f546270b-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"
Mar 13 12:51:13.742638 master-0 kubenswrapper[19715]: I0313 12:51:13.740155 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/f7495ca2-ee01-46f5-b210-5957f546270b-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"
Mar 13 12:51:13.761674 master-0 kubenswrapper[19715]: I0313 12:51:13.757265 19715 scope.go:117] "RemoveContainer" containerID="e7eef51e1851d4064dd3414fbc07997689fa5175ccbd02a52aec36eb5b2d0dd9"
Mar 13 12:51:13.761674 master-0 kubenswrapper[19715]: I0313 12:51:13.757317 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-djlbx" event={"ID":"74f20dbd-f800-4aab-8263-1bc2395c8123","Type":"ContainerStarted","Data":"cda7b398cff70d8b2abc5bf47eb876445ad87ab254a3729e08c79f7b54567fae"}
Mar 13 12:51:13.761674 master-0 kubenswrapper[19715]: E0313 12:51:13.757491 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"console-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=console-operator pod=console-operator-6c7fb6b958-qhg45_openshift-console-operator(572e278b-c463-49b0-a198-49bd9e2c288c)\"" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" podUID="572e278b-c463-49b0-a198-49bd9e2c288c"
Mar 13 12:51:13.768666 master-0 kubenswrapper[19715]: I0313 12:51:13.767343 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brlvw\" (UniqueName: \"kubernetes.io/projected/f7495ca2-ee01-46f5-b210-5957f546270b-kube-api-access-brlvw\") pod \"kube-state-metrics-68b88f8cb5-lz46x\" (UID: \"f7495ca2-ee01-46f5-b210-5957f546270b\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"
Mar 13 12:51:13.961327 master-0 kubenswrapper[19715]: I0313 12:51:13.961178 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"
Mar 13 12:51:14.030057 master-0 kubenswrapper[19715]: I0313 12:51:14.029997 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6db3e185-395c-4d94-82a0-fb14978f626d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2"
Mar 13 12:51:14.037335 master-0 kubenswrapper[19715]: I0313 12:51:14.037272 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/6db3e185-395c-4d94-82a0-fb14978f626d-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-vwks2\" (UID: \"6db3e185-395c-4d94-82a0-fb14978f626d\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2"
Mar 13 12:51:14.287806 master-0 kubenswrapper[19715]: I0313 12:51:14.287730 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2"
Mar 13 12:51:14.428939 master-0 kubenswrapper[19715]: I0313 12:51:14.428710 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 13 12:51:14.431995 master-0 kubenswrapper[19715]: I0313 12:51:14.431871 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.435457 master-0 kubenswrapper[19715]: I0313 12:51:14.434745 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 13 12:51:14.435457 master-0 kubenswrapper[19715]: I0313 12:51:14.435344 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 13 12:51:14.438210 master-0 kubenswrapper[19715]: I0313 12:51:14.435718 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-mjc6s"
Mar 13 12:51:14.438210 master-0 kubenswrapper[19715]: I0313 12:51:14.435858 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 13 12:51:14.438210 master-0 kubenswrapper[19715]: I0313 12:51:14.436313 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 13 12:51:14.438210 master-0 kubenswrapper[19715]: I0313 12:51:14.436491 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 13 12:51:14.438210 master-0 kubenswrapper[19715]: I0313 12:51:14.436704 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 13 12:51:14.442718 master-0 kubenswrapper[19715]: I0313 12:51:14.442662 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 13 12:51:14.454350 master-0 kubenswrapper[19715]: I0313 12:51:14.453728 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 13 12:51:14.475577 master-0 kubenswrapper[19715]: I0313 12:51:14.475424 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700282 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700353 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-config-volume\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700378 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700422 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-config-out\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700441 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700478 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700550 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700626 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9c82r\" (UniqueName: \"kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-kube-api-access-9c82r\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700709 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700737 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-web-config\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700785 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.703638 master-0 kubenswrapper[19715]: I0313 12:51:14.700811 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-tls-assets\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.784879 master-0 kubenswrapper[19715]: I0313 12:51:14.778702 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x"]
Mar 13 12:51:14.836103 master-0 kubenswrapper[19715]: I0313 12:51:14.836012 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.836103 master-0 kubenswrapper[19715]: I0313 12:51:14.836115 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-config-volume\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.836672 master-0 kubenswrapper[19715]: I0313 12:51:14.836141 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.836672 master-0 kubenswrapper[19715]: I0313 12:51:14.836159 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-config-out\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.838678 master-0 kubenswrapper[19715]: I0313 12:51:14.838011 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.839125 master-0 kubenswrapper[19715]: I0313 12:51:14.838959 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.839214 master-0 kubenswrapper[19715]: I0313 12:51:14.839128 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.839292 master-0 kubenswrapper[19715]: I0313 12:51:14.839210 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.839292 master-0 kubenswrapper[19715]: I0313 12:51:14.839259 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9c82r\" (UniqueName: \"kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-kube-api-access-9c82r\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.839447 master-0 kubenswrapper[19715]: I0313 12:51:14.839325 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.839447 master-0 kubenswrapper[19715]: I0313 12:51:14.839369 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-web-config\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.839447 master-0 kubenswrapper[19715]: I0313 12:51:14.839399 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.839447 master-0 kubenswrapper[19715]: I0313 12:51:14.839439 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-tls-assets\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.844860 master-0 kubenswrapper[19715]: I0313 12:51:14.842386 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-config-out\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.844860 master-0 kubenswrapper[19715]: I0313 12:51:14.842894 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-config-volume\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.848119 master-0 kubenswrapper[19715]: W0313 12:51:14.848061 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7495ca2_ee01_46f5_b210_5957f546270b.slice/crio-d23ef90b58fc19790b5863878e3309322920fda44d67e6c7d5e8ae35648aaf4b WatchSource:0}: Error finding container d23ef90b58fc19790b5863878e3309322920fda44d67e6c7d5e8ae35648aaf4b: Status 404 returned error can't find the container with id d23ef90b58fc19790b5863878e3309322920fda44d67e6c7d5e8ae35648aaf4b
Mar 13 12:51:14.849716 master-0 kubenswrapper[19715]: I0313 12:51:14.849678 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.867040 master-0 kubenswrapper[19715]: I0313 12:51:14.862167 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.867040 master-0 kubenswrapper[19715]: I0313 12:51:14.861462 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.875690 master-0 kubenswrapper[19715]: I0313 12:51:14.869048 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.875690 master-0 kubenswrapper[19715]: I0313 12:51:14.874903 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-web-config\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.875690 master-0 kubenswrapper[19715]: I0313 12:51:14.875029 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9c82r\" (UniqueName: \"kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-kube-api-access-9c82r\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.875690 master-0 kubenswrapper[19715]: I0313 12:51:14.875693 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.876292 master-0 kubenswrapper[19715]: I0313 12:51:14.875674 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-tls-assets\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:14.881972 master-0 kubenswrapper[19715]: I0313 12:51:14.878618 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:15.084363 master-0 kubenswrapper[19715]: I0313 12:51:15.082727 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:51:15.182797 master-0 kubenswrapper[19715]: I0313 12:51:15.182563 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"]
Mar 13 12:51:15.183372 master-0 kubenswrapper[19715]: I0313 12:51:15.183320 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" podUID="7343df96-cba2-477b-8a1b-7af369620440" containerName="controller-manager" containerID="cri-o://8bbe2b167360adebde379cc68ee3aad636ef3d2f38f94109c552e500950eb3b4" gracePeriod=30
Mar 13 12:51:15.251921 master-0 kubenswrapper[19715]: I0313 12:51:15.251402 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn"]
Mar 13 12:51:15.251921 master-0 kubenswrapper[19715]: I0313 12:51:15.251752 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" podUID="ef1dbe95-a46f-4d09-87b0-f51429f2d82c" containerName="route-controller-manager" containerID="cri-o://278d68915cc7294ac01aa5d48357a22b6b3777b90445159f08c7639fb945a121" gracePeriod=30
Mar 13 12:51:15.284470 master-0 kubenswrapper[19715]: I0313 12:51:15.284394 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2"]
Mar 13 12:51:15.839137 master-0 kubenswrapper[19715]: I0313 12:51:15.839067 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" event={"ID":"f7495ca2-ee01-46f5-b210-5957f546270b","Type":"ContainerStarted","Data":"d23ef90b58fc19790b5863878e3309322920fda44d67e6c7d5e8ae35648aaf4b"}
Mar 13 12:51:15.864387 master-0 kubenswrapper[19715]: I0313 12:51:15.863966 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 13 12:51:16.416699 master-0 kubenswrapper[19715]: W0313 12:51:16.416558 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb537079_a878_4105_9055_7bc9d93a0333.slice/crio-98f06749f6de1d47d550220a1e1d42935e35af40efb8ae8fffcf76492a2cffa2 WatchSource:0}: Error finding container 98f06749f6de1d47d550220a1e1d42935e35af40efb8ae8fffcf76492a2cffa2: Status 404 returned error can't find the container with id 98f06749f6de1d47d550220a1e1d42935e35af40efb8ae8fffcf76492a2cffa2
Mar 13 12:51:16.479982 master-0 kubenswrapper[19715]: I0313 12:51:16.479325 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-8fc4dc979-blhgb"]
Mar 13 12:51:16.511062 master-0 kubenswrapper[19715]: I0313 12:51:16.510999 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-8fc4dc979-blhgb"]
Mar 13 12:51:16.511469 master-0 kubenswrapper[19715]: I0313 12:51:16.511449 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.517253 master-0 kubenswrapper[19715]: I0313 12:51:16.517202 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 13 12:51:16.518422 master-0 kubenswrapper[19715]: I0313 12:51:16.518379 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 13 12:51:16.518628 master-0 kubenswrapper[19715]: I0313 12:51:16.517948 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7pbjup2gcsfqa"
Mar 13 12:51:16.518885 master-0 kubenswrapper[19715]: I0313 12:51:16.518072 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules"
Mar 13 12:51:16.519097 master-0 kubenswrapper[19715]: I0313 12:51:16.518245 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Mar 13 12:51:16.519312 master-0 kubenswrapper[19715]: I0313 12:51:16.518323 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-zllxz"
Mar 13 12:51:16.519556 master-0 kubenswrapper[19715]: I0313 12:51:16.519368 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d9e50ad5-6999-441a-86ef-d56e490d0d75-metrics-client-ca\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.520463 master-0 kubenswrapper[19715]: I0313 12:51:16.520430 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.520646 master-0 kubenswrapper[19715]: I0313 12:51:16.520623 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.520851 master-0 kubenswrapper[19715]: I0313 12:51:16.520828 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.522350 master-0 kubenswrapper[19715]: I0313 12:51:16.522316 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.522671 master-0 kubenswrapper[19715]: I0313 12:51:16.522647 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-tls\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.522791 master-0 kubenswrapper[19715]: I0313 12:51:16.522770 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ctlp\" (UniqueName: \"kubernetes.io/projected/d9e50ad5-6999-441a-86ef-d56e490d0d75-kube-api-access-9ctlp\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.522964 master-0 kubenswrapper[19715]: I0313 12:51:16.522941 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-grpc-tls\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.547623 master-0 kubenswrapper[19715]: I0313 12:51:16.543294 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Mar 13 12:51:16.649359 master-0 kubenswrapper[19715]: I0313 12:51:16.649278 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.649613 master-0 kubenswrapper[19715]: I0313 12:51:16.649556 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.649788 master-0 kubenswrapper[19715]: I0313 12:51:16.649766 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.649907 master-0 kubenswrapper[19715]: I0313 12:51:16.649889 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.650128 master-0 kubenswrapper[19715]: I0313 12:51:16.650108 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-tls\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.650243 master-0 kubenswrapper[19715]: I0313 12:51:16.650226 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ctlp\" (UniqueName: \"kubernetes.io/projected/d9e50ad5-6999-441a-86ef-d56e490d0d75-kube-api-access-9ctlp\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.650398 master-0 kubenswrapper[19715]: I0313 12:51:16.650380 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-grpc-tls\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.650557 master-0 kubenswrapper[19715]: I0313 12:51:16.650538 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d9e50ad5-6999-441a-86ef-d56e490d0d75-metrics-client-ca\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.651825 master-0 kubenswrapper[19715]: I0313 12:51:16.651801 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d9e50ad5-6999-441a-86ef-d56e490d0d75-metrics-client-ca\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.671700 master-0 kubenswrapper[19715]: I0313 12:51:16.668556 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.671700 master-0 kubenswrapper[19715]: I0313 12:51:16.671537 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.683098 master-0 kubenswrapper[19715]: I0313 12:51:16.682915 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.686704 master-0 kubenswrapper[19715]: I0313 12:51:16.683767 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-tls\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.687937 master-0 kubenswrapper[19715]: I0313 12:51:16.687530 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-grpc-tls\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.687937 master-0 kubenswrapper[19715]: I0313 12:51:16.687530 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/d9e50ad5-6999-441a-86ef-d56e490d0d75-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.714664 master-0 kubenswrapper[19715]: I0313 12:51:16.714164 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ctlp\" (UniqueName: \"kubernetes.io/projected/d9e50ad5-6999-441a-86ef-d56e490d0d75-kube-api-access-9ctlp\") pod \"thanos-querier-8fc4dc979-blhgb\" (UID: \"d9e50ad5-6999-441a-86ef-d56e490d0d75\") " pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb"
Mar 13 12:51:16.866726 master-0 kubenswrapper[19715]: I0313 12:51:16.852462 19715 generic.go:334] "Generic (PLEG): container finished" podID="ef1dbe95-a46f-4d09-87b0-f51429f2d82c" containerID="278d68915cc7294ac01aa5d48357a22b6b3777b90445159f08c7639fb945a121" exitCode=0
Mar 13 12:51:16.866726 master-0 kubenswrapper[19715]: I0313 12:51:16.852543 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" event={"ID":"ef1dbe95-a46f-4d09-87b0-f51429f2d82c","Type":"ContainerDied","Data":"278d68915cc7294ac01aa5d48357a22b6b3777b90445159f08c7639fb945a121"}
Mar 13 12:51:16.866726 master-0 kubenswrapper[19715]: I0313 12:51:16.859034 19715 generic.go:334] "Generic (PLEG): container finished" podID="7343df96-cba2-477b-8a1b-7af369620440" containerID="8bbe2b167360adebde379cc68ee3aad636ef3d2f38f94109c552e500950eb3b4" exitCode=0
Mar 13 12:51:16.866726 master-0 kubenswrapper[19715]: I0313 12:51:16.859134 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" event={"ID":"7343df96-cba2-477b-8a1b-7af369620440","Type":"ContainerDied","Data":"8bbe2b167360adebde379cc68ee3aad636ef3d2f38f94109c552e500950eb3b4"}
Mar 13 12:51:16.866726 master-0 kubenswrapper[19715]: I0313 12:51:16.859189 19715 scope.go:117] "RemoveContainer" containerID="2da3308778e062a9343f0d3dfdc8d6eb4f753f82d1909a294c12d86a1ca52396"
Mar 13 12:51:16.866726
master-0 kubenswrapper[19715]: I0313 12:51:16.865041 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerStarted","Data":"98f06749f6de1d47d550220a1e1d42935e35af40efb8ae8fffcf76492a2cffa2"} Mar 13 12:51:16.873600 master-0 kubenswrapper[19715]: I0313 12:51:16.873533 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" event={"ID":"6db3e185-395c-4d94-82a0-fb14978f626d","Type":"ContainerStarted","Data":"a05b5f958dddbdba80355a3c7f034e3c432cd66f939543f9958ae07e0fb6b3c8"} Mar 13 12:51:16.873918 master-0 kubenswrapper[19715]: I0313 12:51:16.873613 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" event={"ID":"6db3e185-395c-4d94-82a0-fb14978f626d","Type":"ContainerStarted","Data":"4f7d3fe974ceed989f6fa9e3741b26ab49b1d9e6e455b394ee6d1c5869239299"} Mar 13 12:51:16.930802 master-0 kubenswrapper[19715]: I0313 12:51:16.930620 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" Mar 13 12:51:17.024356 master-0 kubenswrapper[19715]: I0313 12:51:17.024270 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:51:17.059395 master-0 kubenswrapper[19715]: I0313 12:51:17.059321 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-config\") pod \"7343df96-cba2-477b-8a1b-7af369620440\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " Mar 13 12:51:17.059395 master-0 kubenswrapper[19715]: I0313 12:51:17.059369 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-proxy-ca-bundles\") pod \"7343df96-cba2-477b-8a1b-7af369620440\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " Mar 13 12:51:17.059395 master-0 kubenswrapper[19715]: I0313 12:51:17.059400 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vg7m\" (UniqueName: \"kubernetes.io/projected/7343df96-cba2-477b-8a1b-7af369620440-kube-api-access-6vg7m\") pod \"7343df96-cba2-477b-8a1b-7af369620440\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " Mar 13 12:51:17.059858 master-0 kubenswrapper[19715]: I0313 12:51:17.059435 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-client-ca\") pod \"7343df96-cba2-477b-8a1b-7af369620440\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " Mar 13 12:51:17.059858 master-0 kubenswrapper[19715]: I0313 12:51:17.059477 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343df96-cba2-477b-8a1b-7af369620440-serving-cert\") pod \"7343df96-cba2-477b-8a1b-7af369620440\" (UID: \"7343df96-cba2-477b-8a1b-7af369620440\") " Mar 13 12:51:17.059858 master-0 kubenswrapper[19715]: I0313 12:51:17.059568 
19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7d4cc84557-q2l7j"] Mar 13 12:51:17.060038 master-0 kubenswrapper[19715]: E0313 12:51:17.059945 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7343df96-cba2-477b-8a1b-7af369620440" containerName="controller-manager" Mar 13 12:51:17.060038 master-0 kubenswrapper[19715]: I0313 12:51:17.059973 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="7343df96-cba2-477b-8a1b-7af369620440" containerName="controller-manager" Mar 13 12:51:17.060038 master-0 kubenswrapper[19715]: E0313 12:51:17.059987 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7343df96-cba2-477b-8a1b-7af369620440" containerName="controller-manager" Mar 13 12:51:17.060038 master-0 kubenswrapper[19715]: I0313 12:51:17.059994 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="7343df96-cba2-477b-8a1b-7af369620440" containerName="controller-manager" Mar 13 12:51:17.060243 master-0 kubenswrapper[19715]: I0313 12:51:17.060187 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="7343df96-cba2-477b-8a1b-7af369620440" containerName="controller-manager" Mar 13 12:51:17.060243 master-0 kubenswrapper[19715]: I0313 12:51:17.060215 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="7343df96-cba2-477b-8a1b-7af369620440" containerName="controller-manager" Mar 13 12:51:17.060957 master-0 kubenswrapper[19715]: I0313 12:51:17.060864 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.063416 master-0 kubenswrapper[19715]: I0313 12:51:17.063259 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7343df96-cba2-477b-8a1b-7af369620440" (UID: "7343df96-cba2-477b-8a1b-7af369620440"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:51:17.063788 master-0 kubenswrapper[19715]: I0313 12:51:17.063732 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-config" (OuterVolumeSpecName: "config") pod "7343df96-cba2-477b-8a1b-7af369620440" (UID: "7343df96-cba2-477b-8a1b-7af369620440"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:51:17.105667 master-0 kubenswrapper[19715]: I0313 12:51:17.066938 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7343df96-cba2-477b-8a1b-7af369620440-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7343df96-cba2-477b-8a1b-7af369620440" (UID: "7343df96-cba2-477b-8a1b-7af369620440"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:51:17.105667 master-0 kubenswrapper[19715]: I0313 12:51:17.067145 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-h5lt2" Mar 13 12:51:17.105667 master-0 kubenswrapper[19715]: I0313 12:51:17.067488 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-client-ca" (OuterVolumeSpecName: "client-ca") pod "7343df96-cba2-477b-8a1b-7af369620440" (UID: "7343df96-cba2-477b-8a1b-7af369620440"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:51:17.142402 master-0 kubenswrapper[19715]: I0313 12:51:17.112567 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7343df96-cba2-477b-8a1b-7af369620440-kube-api-access-6vg7m" (OuterVolumeSpecName: "kube-api-access-6vg7m") pod "7343df96-cba2-477b-8a1b-7af369620440" (UID: "7343df96-cba2-477b-8a1b-7af369620440"). InnerVolumeSpecName "kube-api-access-6vg7m". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:51:17.142402 master-0 kubenswrapper[19715]: I0313 12:51:17.120393 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d4cc84557-q2l7j"] Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.158989 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.161048 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22d72a5e-d090-459b-8301-ddf0bacb0847-proxy-ca-bundles\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.161095 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flmgx\" (UniqueName: \"kubernetes.io/projected/22d72a5e-d090-459b-8301-ddf0bacb0847-kube-api-access-flmgx\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.161122 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22d72a5e-d090-459b-8301-ddf0bacb0847-client-ca\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.161163 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22d72a5e-d090-459b-8301-ddf0bacb0847-config\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.161193 
19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22d72a5e-d090-459b-8301-ddf0bacb0847-serving-cert\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.161268 19715 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.161281 19715 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7343df96-cba2-477b-8a1b-7af369620440-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.161291 19715 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.161301 19715 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7343df96-cba2-477b-8a1b-7af369620440-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:17.163127 master-0 kubenswrapper[19715]: I0313 12:51:17.161310 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vg7m\" (UniqueName: \"kubernetes.io/projected/7343df96-cba2-477b-8a1b-7af369620440-kube-api-access-6vg7m\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:17.262273 master-0 kubenswrapper[19715]: I0313 12:51:17.262108 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-serving-cert\") pod \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " Mar 13 12:51:17.262273 master-0 kubenswrapper[19715]: I0313 12:51:17.262272 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-client-ca\") pod \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " Mar 13 12:51:17.262645 master-0 kubenswrapper[19715]: I0313 12:51:17.262324 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64hl9\" (UniqueName: \"kubernetes.io/projected/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-kube-api-access-64hl9\") pod \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " Mar 13 12:51:17.262645 master-0 kubenswrapper[19715]: I0313 12:51:17.262384 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-config\") pod \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\" (UID: \"ef1dbe95-a46f-4d09-87b0-f51429f2d82c\") " Mar 13 12:51:17.262645 master-0 kubenswrapper[19715]: I0313 12:51:17.262621 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22d72a5e-d090-459b-8301-ddf0bacb0847-proxy-ca-bundles\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.262763 master-0 kubenswrapper[19715]: I0313 12:51:17.262659 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flmgx\" (UniqueName: \"kubernetes.io/projected/22d72a5e-d090-459b-8301-ddf0bacb0847-kube-api-access-flmgx\") pod 
\"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.262763 master-0 kubenswrapper[19715]: I0313 12:51:17.262693 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22d72a5e-d090-459b-8301-ddf0bacb0847-client-ca\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.262763 master-0 kubenswrapper[19715]: I0313 12:51:17.262750 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22d72a5e-d090-459b-8301-ddf0bacb0847-config\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.262871 master-0 kubenswrapper[19715]: I0313 12:51:17.262790 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22d72a5e-d090-459b-8301-ddf0bacb0847-serving-cert\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.264380 master-0 kubenswrapper[19715]: I0313 12:51:17.264292 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-client-ca" (OuterVolumeSpecName: "client-ca") pod "ef1dbe95-a46f-4d09-87b0-f51429f2d82c" (UID: "ef1dbe95-a46f-4d09-87b0-f51429f2d82c"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:51:17.264811 master-0 kubenswrapper[19715]: I0313 12:51:17.264768 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/22d72a5e-d090-459b-8301-ddf0bacb0847-client-ca\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.265335 master-0 kubenswrapper[19715]: I0313 12:51:17.265285 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22d72a5e-d090-459b-8301-ddf0bacb0847-config\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.265405 master-0 kubenswrapper[19715]: I0313 12:51:17.265366 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-config" (OuterVolumeSpecName: "config") pod "ef1dbe95-a46f-4d09-87b0-f51429f2d82c" (UID: "ef1dbe95-a46f-4d09-87b0-f51429f2d82c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:51:17.265556 master-0 kubenswrapper[19715]: I0313 12:51:17.265526 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/22d72a5e-d090-459b-8301-ddf0bacb0847-proxy-ca-bundles\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.268503 master-0 kubenswrapper[19715]: I0313 12:51:17.268460 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/22d72a5e-d090-459b-8301-ddf0bacb0847-serving-cert\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.269051 master-0 kubenswrapper[19715]: I0313 12:51:17.269012 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-kube-api-access-64hl9" (OuterVolumeSpecName: "kube-api-access-64hl9") pod "ef1dbe95-a46f-4d09-87b0-f51429f2d82c" (UID: "ef1dbe95-a46f-4d09-87b0-f51429f2d82c"). InnerVolumeSpecName "kube-api-access-64hl9". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:51:17.269926 master-0 kubenswrapper[19715]: I0313 12:51:17.269863 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ef1dbe95-a46f-4d09-87b0-f51429f2d82c" (UID: "ef1dbe95-a46f-4d09-87b0-f51429f2d82c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:51:17.282817 master-0 kubenswrapper[19715]: I0313 12:51:17.282762 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flmgx\" (UniqueName: \"kubernetes.io/projected/22d72a5e-d090-459b-8301-ddf0bacb0847-kube-api-access-flmgx\") pod \"controller-manager-7d4cc84557-q2l7j\" (UID: \"22d72a5e-d090-459b-8301-ddf0bacb0847\") " pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.364551 master-0 kubenswrapper[19715]: I0313 12:51:17.364500 19715 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:17.364551 master-0 kubenswrapper[19715]: I0313 12:51:17.364537 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64hl9\" (UniqueName: \"kubernetes.io/projected/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-kube-api-access-64hl9\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:17.364551 master-0 kubenswrapper[19715]: I0313 12:51:17.364547 19715 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:17.364551 master-0 kubenswrapper[19715]: I0313 12:51:17.364556 19715 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ef1dbe95-a46f-4d09-87b0-f51429f2d82c-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:17.452472 master-0 kubenswrapper[19715]: I0313 12:51:17.452376 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:17.479718 master-0 kubenswrapper[19715]: I0313 12:51:17.479613 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-8fc4dc979-blhgb"] Mar 13 12:51:17.882539 master-0 kubenswrapper[19715]: I0313 12:51:17.882468 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" event={"ID":"6db3e185-395c-4d94-82a0-fb14978f626d","Type":"ContainerStarted","Data":"1cecc52ae719ef0dc2c886520e987fcaab00f383cf8503e22e256c8ae557df9e"} Mar 13 12:51:17.883835 master-0 kubenswrapper[19715]: I0313 12:51:17.883671 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" event={"ID":"5e94a785-5cca-4645-b97d-7c4caf0c6c42","Type":"ContainerStarted","Data":"c1d1ba416909fefed054729c78e1901709cff857eda9f323c0ab4ed4af467bae"} Mar 13 12:51:17.887666 master-0 kubenswrapper[19715]: I0313 12:51:17.885800 19715 generic.go:334] "Generic (PLEG): container finished" podID="74f20dbd-f800-4aab-8263-1bc2395c8123" containerID="6a4c3949337c397a1144677ed3edc402ca8bd9de3f94bbcfe28aa57058e0ab58" exitCode=0 Mar 13 12:51:17.887666 master-0 kubenswrapper[19715]: I0313 12:51:17.885937 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-djlbx" event={"ID":"74f20dbd-f800-4aab-8263-1bc2395c8123","Type":"ContainerDied","Data":"6a4c3949337c397a1144677ed3edc402ca8bd9de3f94bbcfe28aa57058e0ab58"} Mar 13 12:51:17.891858 master-0 kubenswrapper[19715]: I0313 12:51:17.887892 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" Mar 13 12:51:17.891858 master-0 kubenswrapper[19715]: I0313 12:51:17.887893 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn" event={"ID":"ef1dbe95-a46f-4d09-87b0-f51429f2d82c","Type":"ContainerDied","Data":"1c5ece38636979dc6aaacdac426045ab401d2a85cb39e888cefc074380d03a96"} Mar 13 12:51:17.891858 master-0 kubenswrapper[19715]: I0313 12:51:17.887971 19715 scope.go:117] "RemoveContainer" containerID="278d68915cc7294ac01aa5d48357a22b6b3777b90445159f08c7639fb945a121" Mar 13 12:51:17.891858 master-0 kubenswrapper[19715]: I0313 12:51:17.889717 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" event={"ID":"d9e50ad5-6999-441a-86ef-d56e490d0d75","Type":"ContainerStarted","Data":"f2c5592163b60c8dfbfa8d4166c5a1f4e8deee909edb6822c6b14494766bd9ab"} Mar 13 12:51:17.891858 master-0 kubenswrapper[19715]: E0313 12:51:17.890378 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:51:17.898867 master-0 kubenswrapper[19715]: E0313 12:51:17.892400 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:51:17.905466 master-0 kubenswrapper[19715]: I0313 12:51:17.903472 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-network-console/networking-console-plugin-5cbd49d755-svrbk" podStartSLOduration=3.591959688 podStartE2EDuration="7.903449422s" podCreationTimestamp="2026-03-13 12:51:10 +0000 UTC" firstStartedPulling="2026-03-13 12:51:12.181269154 +0000 UTC m=+98.747941911" lastFinishedPulling="2026-03-13 12:51:16.492758888 +0000 UTC m=+103.059431645" observedRunningTime="2026-03-13 12:51:17.90301607 +0000 UTC m=+104.469688857" watchObservedRunningTime="2026-03-13 12:51:17.903449422 +0000 UTC m=+104.470122179" Mar 13 12:51:17.908135 master-0 kubenswrapper[19715]: I0313 12:51:17.907747 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" event={"ID":"7343df96-cba2-477b-8a1b-7af369620440","Type":"ContainerDied","Data":"5868e4aaa495ba2002dc9f38876278ea8eced1d322d3455b76a22ad5843a0e53"} Mar 13 12:51:17.908135 master-0 kubenswrapper[19715]: I0313 12:51:17.907830 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t" Mar 13 12:51:17.916754 master-0 kubenswrapper[19715]: E0313 12:51:17.916658 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 12:51:17.917088 master-0 kubenswrapper[19715]: E0313 12:51:17.916796 19715 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" podUID="cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" containerName="kube-multus-additional-cni-plugins" Mar 13 12:51:17.969460 master-0 kubenswrapper[19715]: I0313 12:51:17.965504 
19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn"] Mar 13 12:51:17.974622 master-0 kubenswrapper[19715]: I0313 12:51:17.974554 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9955d496f-8zbkn"] Mar 13 12:51:17.993848 master-0 kubenswrapper[19715]: I0313 12:51:17.993771 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"] Mar 13 12:51:17.997915 master-0 kubenswrapper[19715]: I0313 12:51:17.997850 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5ff9c7cb47-f4k6t"] Mar 13 12:51:18.370370 master-0 kubenswrapper[19715]: I0313 12:51:18.370073 19715 scope.go:117] "RemoveContainer" containerID="8bbe2b167360adebde379cc68ee3aad636ef3d2f38f94109c552e500950eb3b4" Mar 13 12:51:18.803343 master-0 kubenswrapper[19715]: I0313 12:51:18.803207 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs"] Mar 13 12:51:18.804499 master-0 kubenswrapper[19715]: E0313 12:51:18.804474 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef1dbe95-a46f-4d09-87b0-f51429f2d82c" containerName="route-controller-manager" Mar 13 12:51:18.804625 master-0 kubenswrapper[19715]: I0313 12:51:18.804611 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef1dbe95-a46f-4d09-87b0-f51429f2d82c" containerName="route-controller-manager" Mar 13 12:51:18.807195 master-0 kubenswrapper[19715]: I0313 12:51:18.804883 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef1dbe95-a46f-4d09-87b0-f51429f2d82c" containerName="route-controller-manager" Mar 13 12:51:18.808035 master-0 kubenswrapper[19715]: I0313 12:51:18.808009 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs" Mar 13 12:51:18.811066 master-0 kubenswrapper[19715]: I0313 12:51:18.810755 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-wsv7b" Mar 13 12:51:18.812194 master-0 kubenswrapper[19715]: I0313 12:51:18.812153 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 13 12:51:18.817625 master-0 kubenswrapper[19715]: I0313 12:51:18.817550 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs"] Mar 13 12:51:18.877952 master-0 kubenswrapper[19715]: I0313 12:51:18.877868 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-7fb9979c45-qlpfr"] Mar 13 12:51:18.884108 master-0 kubenswrapper[19715]: I0313 12:51:18.882398 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.886691 master-0 kubenswrapper[19715]: I0313 12:51:18.886621 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7d4cc84557-q2l7j"] Mar 13 12:51:18.889445 master-0 kubenswrapper[19715]: I0313 12:51:18.889268 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 13 12:51:18.889445 master-0 kubenswrapper[19715]: I0313 12:51:18.889304 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-p8xg8" Mar 13 12:51:18.889445 master-0 kubenswrapper[19715]: I0313 12:51:18.889349 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 13 12:51:18.889445 master-0 kubenswrapper[19715]: I0313 12:51:18.889419 19715 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 13 12:51:18.889816 master-0 kubenswrapper[19715]: I0313 12:51:18.889721 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 13 12:51:18.889952 master-0 kubenswrapper[19715]: I0313 12:51:18.889873 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 13 12:51:18.891504 master-0 kubenswrapper[19715]: I0313 12:51:18.891456 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e9b0a016-5a0f-49e5-a4f1-687da89b6408-monitoring-plugin-cert\") pod \"monitoring-plugin-6f8b57985f-t4whs\" (UID: \"e9b0a016-5a0f-49e5-a4f1-687da89b6408\") " pod="openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs" Mar 13 12:51:18.891674 master-0 kubenswrapper[19715]: I0313 12:51:18.891557 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/badf8d0b-f96a-4919-aea5-a6510a2a2c03-serving-certs-ca-bundle\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.891736 master-0 kubenswrapper[19715]: I0313 12:51:18.891692 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k259l\" (UniqueName: \"kubernetes.io/projected/badf8d0b-f96a-4919-aea5-a6510a2a2c03-kube-api-access-k259l\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.891779 master-0 kubenswrapper[19715]: I0313 12:51:18.891768 19715 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/badf8d0b-f96a-4919-aea5-a6510a2a2c03-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.891829 master-0 kubenswrapper[19715]: I0313 12:51:18.891816 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-secret-telemeter-client\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.892242 master-0 kubenswrapper[19715]: I0313 12:51:18.891843 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-federate-client-tls\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.892242 master-0 kubenswrapper[19715]: I0313 12:51:18.891888 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/badf8d0b-f96a-4919-aea5-a6510a2a2c03-metrics-client-ca\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.892325 master-0 kubenswrapper[19715]: I0313 12:51:18.892273 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.892369 master-0 kubenswrapper[19715]: I0313 12:51:18.892348 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-telemeter-client-tls\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.893433 master-0 kubenswrapper[19715]: I0313 12:51:18.893405 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7fb9979c45-qlpfr"] Mar 13 12:51:18.900665 master-0 kubenswrapper[19715]: I0313 12:51:18.900223 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 13 12:51:18.980340 master-0 kubenswrapper[19715]: I0313 12:51:18.980284 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-6b94c647f5-cmzc9"] Mar 13 12:51:18.981616 master-0 kubenswrapper[19715]: I0313 12:51:18.981575 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:18.986595 master-0 kubenswrapper[19715]: I0313 12:51:18.986478 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-8qlr6" Mar 13 12:51:18.986834 master-0 kubenswrapper[19715]: I0313 12:51:18.986772 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-comnvpv6eh6ml" Mar 13 12:51:18.987002 master-0 kubenswrapper[19715]: I0313 12:51:18.986956 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 13 12:51:18.987256 master-0 kubenswrapper[19715]: I0313 12:51:18.987126 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 13 12:51:18.987256 master-0 kubenswrapper[19715]: I0313 12:51:18.987230 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 13 12:51:18.987431 master-0 kubenswrapper[19715]: I0313 12:51:18.987134 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 13 12:51:18.993388 master-0 kubenswrapper[19715]: I0313 12:51:18.993320 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-secret-telemeter-client\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.993388 master-0 kubenswrapper[19715]: I0313 12:51:18.993384 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6cca39b9-d6c1-486d-a286-6744d0a063bc-audit-log\") pod 
\"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:18.993718 master-0 kubenswrapper[19715]: I0313 12:51:18.993412 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-federate-client-tls\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.993718 master-0 kubenswrapper[19715]: I0313 12:51:18.993452 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99zs2\" (UniqueName: \"kubernetes.io/projected/6cca39b9-d6c1-486d-a286-6744d0a063bc-kube-api-access-99zs2\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:18.993718 master-0 kubenswrapper[19715]: I0313 12:51:18.993483 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/badf8d0b-f96a-4919-aea5-a6510a2a2c03-metrics-client-ca\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.993718 master-0 kubenswrapper[19715]: I0313 12:51:18.993661 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cca39b9-d6c1-486d-a286-6744d0a063bc-secret-metrics-client-certs\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995131 
19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/badf8d0b-f96a-4919-aea5-a6510a2a2c03-metrics-client-ca\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995454 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6cca39b9-d6c1-486d-a286-6744d0a063bc-secret-metrics-server-tls\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995496 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995548 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6cca39b9-d6c1-486d-a286-6744d0a063bc-metrics-server-audit-profiles\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995583 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: 
\"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-telemeter-client-tls\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995642 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/e9b0a016-5a0f-49e5-a4f1-687da89b6408-monitoring-plugin-cert\") pod \"monitoring-plugin-6f8b57985f-t4whs\" (UID: \"e9b0a016-5a0f-49e5-a4f1-687da89b6408\") " pod="openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs" Mar 13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995670 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/badf8d0b-f96a-4919-aea5-a6510a2a2c03-serving-certs-ca-bundle\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995754 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cca39b9-d6c1-486d-a286-6744d0a063bc-client-ca-bundle\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995806 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k259l\" (UniqueName: \"kubernetes.io/projected/badf8d0b-f96a-4919-aea5-a6510a2a2c03-kube-api-access-k259l\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 
13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995831 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cca39b9-d6c1-486d-a286-6744d0a063bc-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:18.996752 master-0 kubenswrapper[19715]: I0313 12:51:18.995982 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/badf8d0b-f96a-4919-aea5-a6510a2a2c03-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:19.000101 master-0 kubenswrapper[19715]: I0313 12:51:18.998421 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-6b94c647f5-cmzc9"] Mar 13 12:51:19.000101 master-0 kubenswrapper[19715]: I0313 12:51:18.999443 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/badf8d0b-f96a-4919-aea5-a6510a2a2c03-serving-certs-ca-bundle\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:19.000101 master-0 kubenswrapper[19715]: I0313 12:51:18.999978 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-secret-telemeter-client\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 
12:51:19.000308 master-0 kubenswrapper[19715]: I0313 12:51:19.000125 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-federate-client-tls\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:19.002118 master-0 kubenswrapper[19715]: I0313 12:51:19.002041 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/badf8d0b-f96a-4919-aea5-a6510a2a2c03-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:19.002482 master-0 kubenswrapper[19715]: I0313 12:51:19.002422 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-telemeter-client-tls\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:19.004045 master-0 kubenswrapper[19715]: I0313 12:51:19.004008 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/badf8d0b-f96a-4919-aea5-a6510a2a2c03-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:19.004656 master-0 kubenswrapper[19715]: I0313 12:51:19.004538 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/e9b0a016-5a0f-49e5-a4f1-687da89b6408-monitoring-plugin-cert\") pod \"monitoring-plugin-6f8b57985f-t4whs\" (UID: \"e9b0a016-5a0f-49e5-a4f1-687da89b6408\") " pod="openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs" Mar 13 12:51:19.031795 master-0 kubenswrapper[19715]: I0313 12:51:19.031737 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k259l\" (UniqueName: \"kubernetes.io/projected/badf8d0b-f96a-4919-aea5-a6510a2a2c03-kube-api-access-k259l\") pod \"telemeter-client-7fb9979c45-qlpfr\" (UID: \"badf8d0b-f96a-4919-aea5-a6510a2a2c03\") " pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:19.069002 master-0 kubenswrapper[19715]: W0313 12:51:19.068550 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod22d72a5e_d090_459b_8301_ddf0bacb0847.slice/crio-a61cf3fc241eb0bc1e321ec220735fee6f34bf76bb116dfecfa8ddaa4f3b7ba6 WatchSource:0}: Error finding container a61cf3fc241eb0bc1e321ec220735fee6f34bf76bb116dfecfa8ddaa4f3b7ba6: Status 404 returned error can't find the container with id a61cf3fc241eb0bc1e321ec220735fee6f34bf76bb116dfecfa8ddaa4f3b7ba6 Mar 13 12:51:19.096713 master-0 kubenswrapper[19715]: I0313 12:51:19.096641 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cca39b9-d6c1-486d-a286-6744d0a063bc-client-ca-bundle\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.096713 master-0 kubenswrapper[19715]: I0313 12:51:19.096697 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cca39b9-d6c1-486d-a286-6744d0a063bc-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6b94c647f5-cmzc9\" 
(UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.097123 master-0 kubenswrapper[19715]: I0313 12:51:19.096741 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6cca39b9-d6c1-486d-a286-6744d0a063bc-audit-log\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.097123 master-0 kubenswrapper[19715]: I0313 12:51:19.096932 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99zs2\" (UniqueName: \"kubernetes.io/projected/6cca39b9-d6c1-486d-a286-6744d0a063bc-kube-api-access-99zs2\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.097123 master-0 kubenswrapper[19715]: I0313 12:51:19.097014 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cca39b9-d6c1-486d-a286-6744d0a063bc-secret-metrics-client-certs\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.098804 master-0 kubenswrapper[19715]: I0313 12:51:19.097362 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/6cca39b9-d6c1-486d-a286-6744d0a063bc-audit-log\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.098804 master-0 kubenswrapper[19715]: I0313 12:51:19.097516 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-server-tls\" (UniqueName: 
\"kubernetes.io/secret/6cca39b9-d6c1-486d-a286-6744d0a063bc-secret-metrics-server-tls\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.098804 master-0 kubenswrapper[19715]: I0313 12:51:19.098498 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6cca39b9-d6c1-486d-a286-6744d0a063bc-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.099863 master-0 kubenswrapper[19715]: I0313 12:51:19.099788 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6cca39b9-d6c1-486d-a286-6744d0a063bc-metrics-server-audit-profiles\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.102347 master-0 kubenswrapper[19715]: I0313 12:51:19.101422 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6cca39b9-d6c1-486d-a286-6744d0a063bc-client-ca-bundle\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.102347 master-0 kubenswrapper[19715]: I0313 12:51:19.101487 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/6cca39b9-d6c1-486d-a286-6744d0a063bc-secret-metrics-client-certs\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 
12:51:19.102347 master-0 kubenswrapper[19715]: I0313 12:51:19.101512 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/6cca39b9-d6c1-486d-a286-6744d0a063bc-metrics-server-audit-profiles\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.104945 master-0 kubenswrapper[19715]: I0313 12:51:19.104857 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/6cca39b9-d6c1-486d-a286-6744d0a063bc-secret-metrics-server-tls\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.119650 master-0 kubenswrapper[19715]: I0313 12:51:19.119607 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99zs2\" (UniqueName: \"kubernetes.io/projected/6cca39b9-d6c1-486d-a286-6744d0a063bc-kube-api-access-99zs2\") pod \"metrics-server-6b94c647f5-cmzc9\" (UID: \"6cca39b9-d6c1-486d-a286-6744d0a063bc\") " pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.150069 master-0 kubenswrapper[19715]: I0313 12:51:19.150016 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs" Mar 13 12:51:19.274271 master-0 kubenswrapper[19715]: I0313 12:51:19.274000 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:51:19.274813 master-0 kubenswrapper[19715]: I0313 12:51:19.274362 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" Mar 13 12:51:19.280197 master-0 kubenswrapper[19715]: I0313 12:51:19.280155 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/cbd86a78-769d-4abc-b02d-48d52d9937c4-cert\") pod \"ingress-canary-ddjx7\" (UID: \"cbd86a78-769d-4abc-b02d-48d52d9937c4\") " pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:51:19.302361 master-0 kubenswrapper[19715]: I0313 12:51:19.302305 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56867db46-krp8x"] Mar 13 12:51:19.303252 master-0 kubenswrapper[19715]: I0313 12:51:19.303224 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.305556 master-0 kubenswrapper[19715]: I0313 12:51:19.305501 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-qggps" Mar 13 12:51:19.305652 master-0 kubenswrapper[19715]: I0313 12:51:19.305571 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 12:51:19.307140 master-0 kubenswrapper[19715]: I0313 12:51:19.305831 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 12:51:19.307140 master-0 kubenswrapper[19715]: I0313 12:51:19.305987 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 12:51:19.307632 master-0 kubenswrapper[19715]: I0313 12:51:19.307607 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 12:51:19.307817 master-0 kubenswrapper[19715]: I0313 12:51:19.307790 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 12:51:19.317037 master-0 kubenswrapper[19715]: I0313 12:51:19.313097 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-ddjx7" Mar 13 12:51:19.317037 master-0 kubenswrapper[19715]: I0313 12:51:19.314841 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56867db46-krp8x"] Mar 13 12:51:19.355411 master-0 kubenswrapper[19715]: I0313 12:51:19.355351 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:19.379416 master-0 kubenswrapper[19715]: I0313 12:51:19.379340 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12f7a988-944b-4adf-9be7-7e41a28e56bc-client-ca\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.379744 master-0 kubenswrapper[19715]: I0313 12:51:19.379445 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f7a988-944b-4adf-9be7-7e41a28e56bc-config\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.379744 master-0 kubenswrapper[19715]: I0313 12:51:19.379486 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p487h\" (UniqueName: \"kubernetes.io/projected/12f7a988-944b-4adf-9be7-7e41a28e56bc-kube-api-access-p487h\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.379744 master-0 kubenswrapper[19715]: I0313 12:51:19.379554 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12f7a988-944b-4adf-9be7-7e41a28e56bc-serving-cert\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.481804 master-0 
kubenswrapper[19715]: I0313 12:51:19.481018 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12f7a988-944b-4adf-9be7-7e41a28e56bc-serving-cert\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.481804 master-0 kubenswrapper[19715]: I0313 12:51:19.481112 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/12f7a988-944b-4adf-9be7-7e41a28e56bc-client-ca\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.481804 master-0 kubenswrapper[19715]: I0313 12:51:19.481166 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f7a988-944b-4adf-9be7-7e41a28e56bc-config\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.481804 master-0 kubenswrapper[19715]: I0313 12:51:19.481202 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p487h\" (UniqueName: \"kubernetes.io/projected/12f7a988-944b-4adf-9be7-7e41a28e56bc-kube-api-access-p487h\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.483077 master-0 kubenswrapper[19715]: I0313 12:51:19.482977 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/12f7a988-944b-4adf-9be7-7e41a28e56bc-client-ca\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.484348 master-0 kubenswrapper[19715]: I0313 12:51:19.484177 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/12f7a988-944b-4adf-9be7-7e41a28e56bc-config\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.492006 master-0 kubenswrapper[19715]: I0313 12:51:19.491055 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/12f7a988-944b-4adf-9be7-7e41a28e56bc-serving-cert\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.507296 master-0 kubenswrapper[19715]: I0313 12:51:19.507242 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p487h\" (UniqueName: \"kubernetes.io/projected/12f7a988-944b-4adf-9be7-7e41a28e56bc-kube-api-access-p487h\") pod \"route-controller-manager-56867db46-krp8x\" (UID: \"12f7a988-944b-4adf-9be7-7e41a28e56bc\") " pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.710008 master-0 kubenswrapper[19715]: I0313 12:51:19.701432 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" Mar 13 12:51:19.860976 master-0 kubenswrapper[19715]: I0313 12:51:19.857508 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7343df96-cba2-477b-8a1b-7af369620440" path="/var/lib/kubelet/pods/7343df96-cba2-477b-8a1b-7af369620440/volumes" Mar 13 12:51:19.865029 master-0 kubenswrapper[19715]: I0313 12:51:19.863595 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef1dbe95-a46f-4d09-87b0-f51429f2d82c" path="/var/lib/kubelet/pods/ef1dbe95-a46f-4d09-87b0-f51429f2d82c/volumes" Mar 13 12:51:20.010017 master-0 kubenswrapper[19715]: I0313 12:51:20.009920 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" event={"ID":"f7495ca2-ee01-46f5-b210-5957f546270b","Type":"ContainerStarted","Data":"75bccba0130e4b369faae7e7f3f90b1dfd31e9c4e13bbe417bc432f076e7036e"} Mar 13 12:51:20.016522 master-0 kubenswrapper[19715]: I0313 12:51:20.016016 19715 generic.go:334] "Generic (PLEG): container finished" podID="fb537079-a878-4105-9055-7bc9d93a0333" containerID="c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8" exitCode=0 Mar 13 12:51:20.016522 master-0 kubenswrapper[19715]: I0313 12:51:20.016186 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerDied","Data":"c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8"} Mar 13 12:51:20.066928 master-0 kubenswrapper[19715]: I0313 12:51:20.063084 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" event={"ID":"22d72a5e-d090-459b-8301-ddf0bacb0847","Type":"ContainerStarted","Data":"c51fbe8ddce74b5b3e0e7403adeb6996da20dac802a20accdf755cf78b139513"} Mar 13 12:51:20.066928 master-0 kubenswrapper[19715]: I0313 
12:51:20.063144 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" event={"ID":"22d72a5e-d090-459b-8301-ddf0bacb0847","Type":"ContainerStarted","Data":"a61cf3fc241eb0bc1e321ec220735fee6f34bf76bb116dfecfa8ddaa4f3b7ba6"} Mar 13 12:51:20.066928 master-0 kubenswrapper[19715]: I0313 12:51:20.064365 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:20.128587 master-0 kubenswrapper[19715]: I0313 12:51:20.128123 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-djlbx" event={"ID":"74f20dbd-f800-4aab-8263-1bc2395c8123","Type":"ContainerStarted","Data":"343ab3c630be4357963c45fed0068c2cde0cf3427b6ec3eb4d66fea942892b23"} Mar 13 12:51:20.159744 master-0 kubenswrapper[19715]: I0313 12:51:20.156221 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" Mar 13 12:51:20.238527 master-0 kubenswrapper[19715]: I0313 12:51:20.235826 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7d4cc84557-q2l7j" podStartSLOduration=5.235802355 podStartE2EDuration="5.235802355s" podCreationTimestamp="2026-03-13 12:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:51:20.156574232 +0000 UTC m=+106.723246999" watchObservedRunningTime="2026-03-13 12:51:20.235802355 +0000 UTC m=+106.802475122" Mar 13 12:51:20.264049 master-0 kubenswrapper[19715]: I0313 12:51:20.263811 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs"] Mar 13 12:51:20.356072 master-0 kubenswrapper[19715]: I0313 12:51:20.355021 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/metrics-server-6b94c647f5-cmzc9"] Mar 13 12:51:20.405986 master-0 kubenswrapper[19715]: I0313 12:51:20.399803 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7fb9979c45-qlpfr"] Mar 13 12:51:20.428459 master-0 kubenswrapper[19715]: I0313 12:51:20.426287 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:51:20.463323 master-0 kubenswrapper[19715]: I0313 12:51:20.437850 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.475511 master-0 kubenswrapper[19715]: I0313 12:51:20.474913 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 13 12:51:20.480444 master-0 kubenswrapper[19715]: I0313 12:51:20.480352 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:51:20.494619 master-0 kubenswrapper[19715]: I0313 12:51:20.481615 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 13 12:51:20.495398 master-0 kubenswrapper[19715]: I0313 12:51:20.482237 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 13 12:51:20.495579 master-0 kubenswrapper[19715]: I0313 12:51:20.482291 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-84k2dnesbumig" Mar 13 12:51:20.495767 master-0 kubenswrapper[19715]: I0313 12:51:20.483213 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 13 12:51:20.495917 master-0 kubenswrapper[19715]: I0313 12:51:20.483287 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 13 
12:51:20.496129 master-0 kubenswrapper[19715]: I0313 12:51:20.483389 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 13 12:51:20.496302 master-0 kubenswrapper[19715]: I0313 12:51:20.484555 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 13 12:51:20.496479 master-0 kubenswrapper[19715]: I0313 12:51:20.484616 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-zzprz" Mar 13 12:51:20.496711 master-0 kubenswrapper[19715]: I0313 12:51:20.484653 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 13 12:51:20.496861 master-0 kubenswrapper[19715]: I0313 12:51:20.484690 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 13 12:51:20.500825 master-0 kubenswrapper[19715]: I0313 12:51:20.490053 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 13 12:51:20.517118 master-0 kubenswrapper[19715]: I0313 12:51:20.516029 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 13 12:51:20.542015 master-0 kubenswrapper[19715]: I0313 12:51:20.541958 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542015 master-0 kubenswrapper[19715]: I0313 12:51:20.542005 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542031 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542059 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542078 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config-out\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542102 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5m4l\" (UniqueName: \"kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-kube-api-access-j5m4l\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542124 
19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542142 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542168 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-web-config\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542204 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542227 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: 
\"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542249 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542265 master-0 kubenswrapper[19715]: I0313 12:51:20.542265 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542691 master-0 kubenswrapper[19715]: I0313 12:51:20.542289 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542691 master-0 kubenswrapper[19715]: I0313 12:51:20.542305 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542691 master-0 kubenswrapper[19715]: I0313 12:51:20.542320 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" 
(UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542691 master-0 kubenswrapper[19715]: I0313 12:51:20.542346 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.542691 master-0 kubenswrapper[19715]: I0313 12:51:20.542361 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.635793 master-0 kubenswrapper[19715]: I0313 12:51:20.629211 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-56867db46-krp8x"] Mar 13 12:51:20.640690 master-0 kubenswrapper[19715]: W0313 12:51:20.639774 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12f7a988_944b_4adf_9be7_7e41a28e56bc.slice/crio-5696e2a08c8067b033f203d5a12c63e59190b5a44b761e1549400ca50ad845ea WatchSource:0}: Error finding container 5696e2a08c8067b033f203d5a12c63e59190b5a44b761e1549400ca50ad845ea: Status 404 returned error can't find the container with id 5696e2a08c8067b033f203d5a12c63e59190b5a44b761e1549400ca50ad845ea Mar 13 12:51:20.649387 master-0 kubenswrapper[19715]: I0313 12:51:20.649335 19715 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.649669 master-0 kubenswrapper[19715]: I0313 12:51:20.649636 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.649734 master-0 kubenswrapper[19715]: I0313 12:51:20.649719 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.649784 master-0 kubenswrapper[19715]: I0313 12:51:20.649748 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.649784 master-0 kubenswrapper[19715]: I0313 12:51:20.649775 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.649841 master-0 kubenswrapper[19715]: I0313 
12:51:20.649815 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.649878 master-0 kubenswrapper[19715]: I0313 12:51:20.649840 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.649939 master-0 kubenswrapper[19715]: I0313 12:51:20.649916 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650015 master-0 kubenswrapper[19715]: I0313 12:51:20.649947 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650015 master-0 kubenswrapper[19715]: I0313 12:51:20.649979 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650085 master-0 kubenswrapper[19715]: I0313 12:51:20.650027 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650085 master-0 kubenswrapper[19715]: I0313 12:51:20.650069 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config-out\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650144 master-0 kubenswrapper[19715]: I0313 12:51:20.650116 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5m4l\" (UniqueName: \"kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-kube-api-access-j5m4l\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650192 master-0 kubenswrapper[19715]: I0313 12:51:20.650162 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650294 master-0 kubenswrapper[19715]: I0313 12:51:20.650190 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: 
\"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650294 master-0 kubenswrapper[19715]: I0313 12:51:20.650225 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-web-config\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650294 master-0 kubenswrapper[19715]: I0313 12:51:20.650285 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650384 master-0 kubenswrapper[19715]: I0313 12:51:20.650352 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.650933 master-0 kubenswrapper[19715]: I0313 12:51:20.650895 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.651008 master-0 kubenswrapper[19715]: I0313 12:51:20.650928 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: 
\"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.651008 master-0 kubenswrapper[19715]: I0313 12:51:20.650917 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.653755 master-0 kubenswrapper[19715]: I0313 12:51:20.653721 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.660475 master-0 kubenswrapper[19715]: I0313 12:51:20.660209 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.663410 master-0 kubenswrapper[19715]: I0313 12:51:20.663375 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-ddjx7"] Mar 13 12:51:20.663951 master-0 kubenswrapper[19715]: I0313 12:51:20.663898 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.664859 master-0 kubenswrapper[19715]: I0313 12:51:20.664801 
19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.666141 master-0 kubenswrapper[19715]: I0313 12:51:20.665253 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config-out\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.666141 master-0 kubenswrapper[19715]: I0313 12:51:20.665565 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.666141 master-0 kubenswrapper[19715]: I0313 12:51:20.665881 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.666141 master-0 kubenswrapper[19715]: I0313 12:51:20.665975 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-web-config\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:20.666418 master-0 kubenswrapper[19715]: I0313 12:51:20.666133 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:51:20.667268 master-0 kubenswrapper[19715]: I0313 12:51:20.667229 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:51:20.667268 master-0 kubenswrapper[19715]: I0313 12:51:20.667249 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:51:20.667436 master-0 kubenswrapper[19715]: I0313 12:51:20.667407 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:51:20.667538 master-0 kubenswrapper[19715]: I0313 12:51:20.667468 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:51:20.674133 master-0 kubenswrapper[19715]: I0313 12:51:20.669963 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:51:20.679018 master-0 kubenswrapper[19715]: W0313 12:51:20.678925 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcbd86a78_769d_4abc_b02d_48d52d9937c4.slice/crio-eb7539d5a3d9186b280426c0361a1def894c5a57b4695a59967e567ce2e98d29 WatchSource:0}: Error finding container eb7539d5a3d9186b280426c0361a1def894c5a57b4695a59967e567ce2e98d29: Status 404 returned error can't find the container with id eb7539d5a3d9186b280426c0361a1def894c5a57b4695a59967e567ce2e98d29
Mar 13 12:51:20.682568 master-0 kubenswrapper[19715]: I0313 12:51:20.682535 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5m4l\" (UniqueName: \"kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-kube-api-access-j5m4l\") pod \"prometheus-k8s-0\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:51:20.988745 master-0 kubenswrapper[19715]: I0313 12:51:20.988609 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:51:21.149507 master-0 kubenswrapper[19715]: I0313 12:51:21.149435 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" event={"ID":"6db3e185-395c-4d94-82a0-fb14978f626d","Type":"ContainerStarted","Data":"00d68057ed4c3e09410674b18f996e1b19348db9c22c794a51b5d2b0bb3a95aa"}
Mar 13 12:51:21.169159 master-0 kubenswrapper[19715]: I0313 12:51:21.169074 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" event={"ID":"12f7a988-944b-4adf-9be7-7e41a28e56bc","Type":"ContainerStarted","Data":"261724ae0e10bb0c8ae144dc00cc80c617d239ec606d54f165b258ebcc625260"}
Mar 13 12:51:21.169397 master-0 kubenswrapper[19715]: I0313 12:51:21.169172 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" event={"ID":"12f7a988-944b-4adf-9be7-7e41a28e56bc","Type":"ContainerStarted","Data":"5696e2a08c8067b033f203d5a12c63e59190b5a44b761e1549400ca50ad845ea"}
Mar 13 12:51:21.170014 master-0 kubenswrapper[19715]: I0313 12:51:21.169955 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x"
Mar 13 12:51:21.179962 master-0 kubenswrapper[19715]: I0313 12:51:21.179861 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" event={"ID":"badf8d0b-f96a-4919-aea5-a6510a2a2c03","Type":"ContainerStarted","Data":"99ec3c1e1eb684915dd6cf1323f8b9bab9ac48dc109030781076f3020054c009"}
Mar 13 12:51:21.189719 master-0 kubenswrapper[19715]: I0313 12:51:21.189649 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-djlbx" event={"ID":"74f20dbd-f800-4aab-8263-1bc2395c8123","Type":"ContainerStarted","Data":"0a6b7a4d07f4a823def9cdfe0c84c8804f6f268049f0964584c66ab8adb2dc48"}
Mar 13 12:51:21.191566 master-0 kubenswrapper[19715]: I0313 12:51:21.191451 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-vwks2" podStartSLOduration=6.04683503 podStartE2EDuration="8.191430187s" podCreationTimestamp="2026-03-13 12:51:13 +0000 UTC" firstStartedPulling="2026-03-13 12:51:16.996588019 +0000 UTC m=+103.563260776" lastFinishedPulling="2026-03-13 12:51:19.141183166 +0000 UTC m=+105.707855933" observedRunningTime="2026-03-13 12:51:21.19026764 +0000 UTC m=+107.756940397" watchObservedRunningTime="2026-03-13 12:51:21.191430187 +0000 UTC m=+107.758102944"
Mar 13 12:51:21.193519 master-0 kubenswrapper[19715]: I0313 12:51:21.193490 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs" event={"ID":"e9b0a016-5a0f-49e5-a4f1-687da89b6408","Type":"ContainerStarted","Data":"8b7a1757cfacb25550b6cc450b282b4846e6daf5e0bcded487a3553ac5a9f5be"}
Mar 13 12:51:21.196094 master-0 kubenswrapper[19715]: I0313 12:51:21.196016 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" event={"ID":"6cca39b9-d6c1-486d-a286-6744d0a063bc","Type":"ContainerStarted","Data":"9595f2807e005df32f147f9d32de214956a89ab5460a7077a9f23fb9b37f3d85"}
Mar 13 12:51:21.201751 master-0 kubenswrapper[19715]: I0313 12:51:21.201713 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" event={"ID":"f7495ca2-ee01-46f5-b210-5957f546270b","Type":"ContainerStarted","Data":"3d99462e5bcf28a2a50c1e92dc5950e0a7f9b6958f9aa7411c01ea5850cf16d0"}
Mar 13 12:51:21.202868 master-0 kubenswrapper[19715]: I0313 12:51:21.202846 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" event={"ID":"f7495ca2-ee01-46f5-b210-5957f546270b","Type":"ContainerStarted","Data":"230eac6fc5e89d0a845e85c320c825f2c01021dda0c107c80175ec48d35ee03d"}
Mar 13 12:51:21.219698 master-0 kubenswrapper[19715]: I0313 12:51:21.217834 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ddjx7" event={"ID":"cbd86a78-769d-4abc-b02d-48d52d9937c4","Type":"ContainerStarted","Data":"db8f15a66f01561e9abb1390dee5f5a77386ad9e7f022b03ed8daa013d78471d"}
Mar 13 12:51:21.219698 master-0 kubenswrapper[19715]: I0313 12:51:21.217897 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-ddjx7" event={"ID":"cbd86a78-769d-4abc-b02d-48d52d9937c4","Type":"ContainerStarted","Data":"eb7539d5a3d9186b280426c0361a1def894c5a57b4695a59967e567ce2e98d29"}
Mar 13 12:51:21.228329 master-0 kubenswrapper[19715]: I0313 12:51:21.224922 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-djlbx" podStartSLOduration=5.383470678 podStartE2EDuration="8.224895376s" podCreationTimestamp="2026-03-13 12:51:13 +0000 UTC" firstStartedPulling="2026-03-13 12:51:13.701458951 +0000 UTC m=+100.268131708" lastFinishedPulling="2026-03-13 12:51:16.542883649 +0000 UTC m=+103.109556406" observedRunningTime="2026-03-13 12:51:21.222322425 +0000 UTC m=+107.788995212" watchObservedRunningTime="2026-03-13 12:51:21.224895376 +0000 UTC m=+107.791568133"
Mar 13 12:51:21.258915 master-0 kubenswrapper[19715]: I0313 12:51:21.258055 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x" podStartSLOduration=6.258028854 podStartE2EDuration="6.258028854s" podCreationTimestamp="2026-03-13 12:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:51:21.243969193 +0000 UTC m=+107.810641950" watchObservedRunningTime="2026-03-13 12:51:21.258028854 +0000 UTC m=+107.824701611"
Mar 13 12:51:21.321895 master-0 kubenswrapper[19715]: I0313 12:51:21.320479 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-lz46x" podStartSLOduration=4.773665838 podStartE2EDuration="8.320448411s" podCreationTimestamp="2026-03-13 12:51:13 +0000 UTC" firstStartedPulling="2026-03-13 12:51:14.877962796 +0000 UTC m=+101.444635553" lastFinishedPulling="2026-03-13 12:51:18.424745369 +0000 UTC m=+104.991418126" observedRunningTime="2026-03-13 12:51:21.293096464 +0000 UTC m=+107.859769241" watchObservedRunningTime="2026-03-13 12:51:21.320448411 +0000 UTC m=+107.887121158"
Mar 13 12:51:21.335744 master-0 kubenswrapper[19715]: I0313 12:51:21.334581 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-ddjx7" podStartSLOduration=34.334556163 podStartE2EDuration="34.334556163s" podCreationTimestamp="2026-03-13 12:50:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:51:21.317147767 +0000 UTC m=+107.883820534" watchObservedRunningTime="2026-03-13 12:51:21.334556163 +0000 UTC m=+107.901228920"
Mar 13 12:51:21.359337 master-0 kubenswrapper[19715]: I0313 12:51:21.359273 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-56867db46-krp8x"
Mar 13 12:51:21.788537 master-0 kubenswrapper[19715]: I0313 12:51:21.788461 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-l2xgj_cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd/kube-multus-additional-cni-plugins/0.log"
Mar 13 12:51:21.788881 master-0 kubenswrapper[19715]: I0313 12:51:21.788606 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj"
Mar 13 12:51:21.883966 master-0 kubenswrapper[19715]: I0313 12:51:21.881699 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 13 12:51:21.899384 master-0 kubenswrapper[19715]: W0313 12:51:21.899290 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f9cfe2_048d_42c1_bd6c_30ab66b713d1.slice/crio-a6d4c32d41120afcac0120743281b178782a2ee49a010b4d81fdaac2526d5db1 WatchSource:0}: Error finding container a6d4c32d41120afcac0120743281b178782a2ee49a010b4d81fdaac2526d5db1: Status 404 returned error can't find the container with id a6d4c32d41120afcac0120743281b178782a2ee49a010b4d81fdaac2526d5db1
Mar 13 12:51:21.929803 master-0 kubenswrapper[19715]: I0313 12:51:21.929700 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xxc7\" (UniqueName: \"kubernetes.io/projected/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-kube-api-access-8xxc7\") pod \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") "
Mar 13 12:51:21.930110 master-0 kubenswrapper[19715]: I0313 12:51:21.929837 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-ready\") pod \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") "
Mar 13 12:51:21.930110 master-0 kubenswrapper[19715]: I0313 12:51:21.929932 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-cni-sysctl-allowlist\") pod \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") "
Mar 13 12:51:21.930110 master-0 kubenswrapper[19715]: I0313 12:51:21.929966 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-tuning-conf-dir\") pod \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\" (UID: \"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd\") "
Mar 13 12:51:21.930421 master-0 kubenswrapper[19715]: I0313 12:51:21.930301 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" (UID: "cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:51:21.932590 master-0 kubenswrapper[19715]: I0313 12:51:21.932496 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" (UID: "cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:51:21.932831 master-0 kubenswrapper[19715]: I0313 12:51:21.932794 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-ready" (OuterVolumeSpecName: "ready") pod "cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" (UID: "cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:51:21.935101 master-0 kubenswrapper[19715]: I0313 12:51:21.935066 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-kube-api-access-8xxc7" (OuterVolumeSpecName: "kube-api-access-8xxc7") pod "cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" (UID: "cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd"). InnerVolumeSpecName "kube-api-access-8xxc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:51:22.032625 master-0 kubenswrapper[19715]: I0313 12:51:22.032513 19715 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\""
Mar 13 12:51:22.032625 master-0 kubenswrapper[19715]: I0313 12:51:22.032555 19715 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-tuning-conf-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:51:22.032625 master-0 kubenswrapper[19715]: I0313 12:51:22.032568 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xxc7\" (UniqueName: \"kubernetes.io/projected/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-kube-api-access-8xxc7\") on node \"master-0\" DevicePath \"\""
Mar 13 12:51:22.032625 master-0 kubenswrapper[19715]: I0313 12:51:22.032579 19715 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd-ready\") on node \"master-0\" DevicePath \"\""
Mar 13 12:51:22.466382 master-0 kubenswrapper[19715]: I0313 12:51:22.463938 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerStarted","Data":"a6d4c32d41120afcac0120743281b178782a2ee49a010b4d81fdaac2526d5db1"}
Mar 13 12:51:22.488420 master-0 kubenswrapper[19715]: I0313 12:51:22.488308 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-l2xgj_cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd/kube-multus-additional-cni-plugins/0.log"
Mar 13 12:51:22.489668 master-0 kubenswrapper[19715]: I0313 12:51:22.488506 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" event={"ID":"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd","Type":"ContainerDied","Data":"039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c"}
Mar 13 12:51:22.489668 master-0 kubenswrapper[19715]: I0313 12:51:22.488717 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj"
Mar 13 12:51:22.492789 master-0 kubenswrapper[19715]: I0313 12:51:22.488427 19715 generic.go:334] "Generic (PLEG): container finished" podID="cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c" exitCode=137
Mar 13 12:51:22.493107 master-0 kubenswrapper[19715]: I0313 12:51:22.493036 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-l2xgj" event={"ID":"cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd","Type":"ContainerDied","Data":"b9a6d8d694b1ba6b438559e0e897eb4939b3a5692ebb952a3d8db17b6e0a3186"}
Mar 13 12:51:22.493183 master-0 kubenswrapper[19715]: I0313 12:51:22.493131 19715 scope.go:117] "RemoveContainer" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c"
Mar 13 12:51:22.545005 master-0 kubenswrapper[19715]: I0313 12:51:22.544761 19715 scope.go:117] "RemoveContainer" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c"
Mar 13 12:51:22.559692 master-0 kubenswrapper[19715]: E0313 12:51:22.549426 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c\": container with ID starting with 039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c not found: ID does not exist" containerID="039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c"
Mar 13 12:51:22.559692 master-0 kubenswrapper[19715]: I0313 12:51:22.549541 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c"} err="failed to get container status \"039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c\": rpc error: code = NotFound desc = could not find container \"039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c\": container with ID starting with 039bde329319820cd7ed6fe3a7b8cd869300a634db3a01ba0aa6d218c801bb4c not found: ID does not exist"
Mar 13 12:51:22.576770 master-0 kubenswrapper[19715]: I0313 12:51:22.576022 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l2xgj"]
Mar 13 12:51:22.584978 master-0 kubenswrapper[19715]: I0313 12:51:22.584886 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-l2xgj"]
Mar 13 12:51:23.514760 master-0 kubenswrapper[19715]: I0313 12:51:23.514046 19715 generic.go:334] "Generic (PLEG): container finished" podID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerID="8673458808c732fe7f95b1d4faa7c40036a20d9b20d978c9e0314fa25bae2c05" exitCode=0
Mar 13 12:51:23.515654 master-0 kubenswrapper[19715]: I0313 12:51:23.514935 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerDied","Data":"8673458808c732fe7f95b1d4faa7c40036a20d9b20d978c9e0314fa25bae2c05"}
Mar 13 12:51:23.604288 master-0 kubenswrapper[19715]: E0313 12:51:23.604213 19715 configmap.go:193] Couldn't get configMap openshift-monitoring/prometheus-k8s-rulefiles-0: configmap "prometheus-k8s-rulefiles-0" not found
Mar 13 12:51:23.604545 master-0 kubenswrapper[19715]: E0313 12:51:23.604322 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-rulefiles-0 podName:50f9cfe2-048d-42c1-bd6c-30ab66b713d1 nodeName:}" failed. No retries permitted until 2026-03-13 12:51:24.104285602 +0000 UTC m=+110.670958359 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-k8s-rulefiles-0" (UniqueName: "kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-rulefiles-0") pod "prometheus-k8s-0" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1") : configmap "prometheus-k8s-rulefiles-0" not found
Mar 13 12:51:23.745965 master-0 kubenswrapper[19715]: I0313 12:51:23.741330 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" path="/var/lib/kubelet/pods/cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd/volumes"
Mar 13 12:51:26.233295 master-0 kubenswrapper[19715]: I0313 12:51:26.233215 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:51:27.710410 master-0 kubenswrapper[19715]: I0313 12:51:27.710344 19715 scope.go:117] "RemoveContainer" containerID="e7eef51e1851d4064dd3414fbc07997689fa5175ccbd02a52aec36eb5b2d0dd9"
Mar 13 12:51:30.745665 master-0 kubenswrapper[19715]: I0313 12:51:30.745600 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 13 12:51:30.746811 master-0 kubenswrapper[19715]: E0313 12:51:30.746786 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" containerName="kube-multus-additional-cni-plugins"
Mar 13 12:51:30.746961 master-0 kubenswrapper[19715]: I0313 12:51:30.746941 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" containerName="kube-multus-additional-cni-plugins"
Mar 13 12:51:30.747265 master-0 kubenswrapper[19715]: I0313 12:51:30.747241 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbe4f7d0-f025-4d28-a3b4-c4f942b6d3bd" containerName="kube-multus-additional-cni-plugins"
Mar 13 12:51:30.748193 master-0 kubenswrapper[19715]: I0313 12:51:30.748164 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:30.758074 master-0 kubenswrapper[19715]: I0313 12:51:30.758010 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt"
Mar 13 12:51:30.758364 master-0 kubenswrapper[19715]: I0313 12:51:30.758289 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-phtzh"
Mar 13 12:51:30.772226 master-0 kubenswrapper[19715]: I0313 12:51:30.772153 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 13 12:51:30.815870 master-0 kubenswrapper[19715]: I0313 12:51:30.815811 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:30.816113 master-0 kubenswrapper[19715]: I0313 12:51:30.816003 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc6e9ceb-c6bf-409f-b515-b441a94db482-kube-api-access\") pod \"installer-2-master-0\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:30.816113 master-0 kubenswrapper[19715]: I0313 12:51:30.816066 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-var-lock\") pod \"installer-2-master-0\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:30.917617 master-0 kubenswrapper[19715]: I0313 12:51:30.917519 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc6e9ceb-c6bf-409f-b515-b441a94db482-kube-api-access\") pod \"installer-2-master-0\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:30.917617 master-0 kubenswrapper[19715]: I0313 12:51:30.917599 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-var-lock\") pod \"installer-2-master-0\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:30.917936 master-0 kubenswrapper[19715]: I0313 12:51:30.917694 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:30.917936 master-0 kubenswrapper[19715]: I0313 12:51:30.917818 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:30.917936 master-0 kubenswrapper[19715]: I0313 12:51:30.917838 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-var-lock\") pod \"installer-2-master-0\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:30.934740 master-0 kubenswrapper[19715]: I0313 12:51:30.934630 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc6e9ceb-c6bf-409f-b515-b441a94db482-kube-api-access\") pod \"installer-2-master-0\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:31.083719 master-0 kubenswrapper[19715]: I0313 12:51:31.083651 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0"
Mar 13 12:51:31.622367 master-0 kubenswrapper[19715]: I0313 12:51:31.622274 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerStarted","Data":"9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110"}
Mar 13 12:51:31.626320 master-0 kubenswrapper[19715]: I0313 12:51:31.626269 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" event={"ID":"badf8d0b-f96a-4919-aea5-a6510a2a2c03","Type":"ContainerStarted","Data":"b40343b7347b8405fdf26967937bbcb60c5936548971e0b7b37e84d0e702656b"}
Mar 13 12:51:31.626431 master-0 kubenswrapper[19715]: I0313 12:51:31.626328 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" event={"ID":"badf8d0b-f96a-4919-aea5-a6510a2a2c03","Type":"ContainerStarted","Data":"7a34a88aaf238428a707967a26e575b77204daae9982cdb694153b043d769b6f"}
Mar 13 12:51:31.652409 master-0 kubenswrapper[19715]: I0313 12:51:31.652358 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-qhg45_572e278b-c463-49b0-a198-49bd9e2c288c/console-operator/2.log"
Mar 13 12:51:31.652642 master-0 kubenswrapper[19715]: I0313 12:51:31.652456 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" event={"ID":"572e278b-c463-49b0-a198-49bd9e2c288c","Type":"ContainerStarted","Data":"5442e4d4d1ac02a16b69799221b9d7df6207e721b455ba40e0a52f37a385c4fe"}
Mar 13 12:51:31.656813 master-0 kubenswrapper[19715]: I0313 12:51:31.655903 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45"
Mar 13 12:51:31.778381 master-0 kubenswrapper[19715]: I0313 12:51:31.778282 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" event={"ID":"d9e50ad5-6999-441a-86ef-d56e490d0d75","Type":"ContainerStarted","Data":"977bafc9f83fb4271cc50e43bc064514dc49a650160c446828dfd474cba19c8b"}
Mar 13 12:51:31.805945 master-0 kubenswrapper[19715]: I0313 12:51:31.805879 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" event={"ID":"d9e50ad5-6999-441a-86ef-d56e490d0d75","Type":"ContainerStarted","Data":"db28e72ee76f6248081c6b0cd2d0049b50f05ccb5715813300f14a61e185f156"}
Mar 13 12:51:31.805945 master-0 kubenswrapper[19715]: I0313 12:51:31.805941 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs" event={"ID":"e9b0a016-5a0f-49e5-a4f1-687da89b6408","Type":"ContainerStarted","Data":"a20c79ae6d62b40686e966cbd9ce659a5781a495dd6d6e259b8deaf983e80e90"}
Mar 13 12:51:31.806275 master-0 kubenswrapper[19715]: I0313 12:51:31.805964 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs"
Mar 13 12:51:31.806275 master-0 kubenswrapper[19715]: I0313 12:51:31.806034 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" event={"ID":"6cca39b9-d6c1-486d-a286-6744d0a063bc","Type":"ContainerStarted","Data":"bcf168583630c5b740112f5ffab1230a0a0aeba9802db82bfa8a465ff45ec533"}
Mar 13 12:51:31.815552 master-0 kubenswrapper[19715]: I0313 12:51:31.815515 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs"
Mar 13 12:51:31.816047 master-0 kubenswrapper[19715]: I0313 12:51:31.815814 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerStarted","Data":"6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7"}
Mar 13 12:51:31.821957 master-0 kubenswrapper[19715]: I0313 12:51:31.821860 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45" podStartSLOduration=46.925599796 podStartE2EDuration="49.821805441s" podCreationTimestamp="2026-03-13 12:50:42 +0000 UTC" firstStartedPulling="2026-03-13 12:50:43.505233591 +0000 UTC m=+70.071906348" lastFinishedPulling="2026-03-13 12:50:46.401439236 +0000 UTC m=+72.968111993" observedRunningTime="2026-03-13 12:51:31.802334041 +0000 UTC m=+118.369006798" watchObservedRunningTime="2026-03-13 12:51:31.821805441 +0000 UTC m=+118.388478198"
Mar 13 12:51:31.824416 master-0 kubenswrapper[19715]: I0313 12:51:31.824378 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"]
Mar 13 12:51:31.855765 master-0 kubenswrapper[19715]: I0313 12:51:31.851291 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6f8b57985f-t4whs" podStartSLOduration=3.149360188 podStartE2EDuration="13.851265794s" podCreationTimestamp="2026-03-13 12:51:18 +0000 UTC" firstStartedPulling="2026-03-13 12:51:20.259093935 +0000 UTC m=+106.825766692" lastFinishedPulling="2026-03-13 12:51:30.960999541 +0000 UTC m=+117.527672298" observedRunningTime="2026-03-13 12:51:31.837893405 +0000 UTC m=+118.404566182" watchObservedRunningTime="2026-03-13 12:51:31.851265794 +0000 UTC m=+118.417938561"
Mar 13 12:51:31.895626 master-0 kubenswrapper[19715]: W0313 12:51:31.895529 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-podcc6e9ceb_c6bf_409f_b515_b441a94db482.slice/crio-9e28b71a728c0e0742441c66e1b146a4b4ac35057853f94d7afcc53f16ebba6b WatchSource:0}: Error finding container 9e28b71a728c0e0742441c66e1b146a4b4ac35057853f94d7afcc53f16ebba6b: Status 404 returned error can't find the container with id 9e28b71a728c0e0742441c66e1b146a4b4ac35057853f94d7afcc53f16ebba6b
Mar 13 12:51:31.944604 master-0 kubenswrapper[19715]: I0313 12:51:31.942164 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" podStartSLOduration=3.364506262 podStartE2EDuration="13.942136453s" podCreationTimestamp="2026-03-13 12:51:18 +0000 UTC" firstStartedPulling="2026-03-13 12:51:20.383397481 +0000 UTC m=+106.950070238" lastFinishedPulling="2026-03-13 12:51:30.961027672 +0000 UTC m=+117.527700429" observedRunningTime="2026-03-13 12:51:31.922709024 +0000 UTC m=+118.489381791" watchObservedRunningTime="2026-03-13 12:51:31.942136453 +0000 UTC m=+118.508809210"
Mar 13 12:51:32.046438 master-0 kubenswrapper[19715]: I0313 12:51:32.046195 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6c7fb6b958-qhg45"
Mar 13 12:51:32.420071 master-0 kubenswrapper[19715]: I0313 12:51:32.416617 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-84f57b9877-nz574"]
Mar 13 12:51:32.420071 master-0 kubenswrapper[19715]: I0313 12:51:32.419208 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-nz574"
Mar 13 12:51:32.431525 master-0 kubenswrapper[19715]: I0313 12:51:32.431455 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-gbnht"
Mar 13 12:51:32.431828 master-0 kubenswrapper[19715]: I0313 12:51:32.431529 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 13 12:51:32.431828 master-0 kubenswrapper[19715]: I0313 12:51:32.431732 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 13 12:51:32.475880 master-0 kubenswrapper[19715]: I0313 12:51:32.466452 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-nz574"]
Mar 13 12:51:32.531051 master-0 kubenswrapper[19715]: I0313 12:51:32.517114 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd8lb\" (UniqueName: \"kubernetes.io/projected/a64d9c42-4a0b-472a-955a-4edab6b33210-kube-api-access-sd8lb\") pod \"downloads-84f57b9877-nz574\" (UID: \"a64d9c42-4a0b-472a-955a-4edab6b33210\") " pod="openshift-console/downloads-84f57b9877-nz574"
Mar 13 12:51:32.625184 master-0 kubenswrapper[19715]: I0313 12:51:32.619778 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd8lb\" (UniqueName: \"kubernetes.io/projected/a64d9c42-4a0b-472a-955a-4edab6b33210-kube-api-access-sd8lb\") pod \"downloads-84f57b9877-nz574\" (UID: \"a64d9c42-4a0b-472a-955a-4edab6b33210\") " pod="openshift-console/downloads-84f57b9877-nz574"
Mar 13 12:51:32.639214 master-0 kubenswrapper[19715]: I0313 12:51:32.639124 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd8lb\" (UniqueName: \"kubernetes.io/projected/a64d9c42-4a0b-472a-955a-4edab6b33210-kube-api-access-sd8lb\") pod \"downloads-84f57b9877-nz574\" (UID: \"a64d9c42-4a0b-472a-955a-4edab6b33210\") " pod="openshift-console/downloads-84f57b9877-nz574"
Mar 13 12:51:32.791450 master-0 kubenswrapper[19715]: I0313 12:51:32.790665 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-nz574"
Mar 13 12:51:32.868087 master-0 kubenswrapper[19715]: I0313 12:51:32.867815 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerStarted","Data":"394476f4de2eb5477b8f78919726c06de4a0720a783079391349dd5d72e44469"}
Mar 13 12:51:32.868087 master-0 kubenswrapper[19715]: I0313 12:51:32.867937 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerStarted","Data":"9463824c4754254b1aa46bbaffa14766bfc6621f106bd61ec45a341f649f56be"}
Mar 13 12:51:32.868087 master-0 kubenswrapper[19715]: I0313 12:51:32.867953 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerStarted","Data":"b5d41bd31df12d6bde4b88fd262f4bd668a988bb2fa111efac5ce9627109d651"}
Mar 13 12:51:32.871290 master-0 kubenswrapper[19715]: I0313 12:51:32.869925 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"cc6e9ceb-c6bf-409f-b515-b441a94db482","Type":"ContainerStarted","Data":"a06ff4033f4b0ff4cea4546690c4b01d7db25e90d2f3498981f6c1d007061576"}
Mar 13 12:51:32.871290 master-0 kubenswrapper[19715]: I0313 12:51:32.870020 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"cc6e9ceb-c6bf-409f-b515-b441a94db482","Type":"ContainerStarted","Data":"9e28b71a728c0e0742441c66e1b146a4b4ac35057853f94d7afcc53f16ebba6b"}
Mar 13
12:51:32.879972 master-0 kubenswrapper[19715]: I0313 12:51:32.874056 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerStarted","Data":"5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5"} Mar 13 12:51:32.879972 master-0 kubenswrapper[19715]: I0313 12:51:32.874105 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerStarted","Data":"192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41"} Mar 13 12:51:32.879972 master-0 kubenswrapper[19715]: I0313 12:51:32.874114 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerStarted","Data":"b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614"} Mar 13 12:51:32.880489 master-0 kubenswrapper[19715]: I0313 12:51:32.879977 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" event={"ID":"badf8d0b-f96a-4919-aea5-a6510a2a2c03","Type":"ContainerStarted","Data":"39ac853291d5ec75c660e3822ac7d8d4a261310906712cf99231e2cfbabfc481"} Mar 13 12:51:32.885751 master-0 kubenswrapper[19715]: I0313 12:51:32.885300 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" event={"ID":"d9e50ad5-6999-441a-86ef-d56e490d0d75","Type":"ContainerStarted","Data":"a06ecff3b5dc2e72e30f44c307c37744deff13899df8db07c9cd4d0e609f1b88"} Mar 13 12:51:32.912504 master-0 kubenswrapper[19715]: I0313 12:51:32.910973 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.910927787 podStartE2EDuration="2.910927787s" podCreationTimestamp="2026-03-13 12:51:30 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:51:32.906252011 +0000 UTC m=+119.472924778" watchObservedRunningTime="2026-03-13 12:51:32.910927787 +0000 UTC m=+119.477600544" Mar 13 12:51:32.949297 master-0 kubenswrapper[19715]: I0313 12:51:32.947564 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-7fb9979c45-qlpfr" podStartSLOduration=4.429989777 podStartE2EDuration="14.947537705s" podCreationTimestamp="2026-03-13 12:51:18 +0000 UTC" firstStartedPulling="2026-03-13 12:51:20.44079663 +0000 UTC m=+107.007469397" lastFinishedPulling="2026-03-13 12:51:30.958344568 +0000 UTC m=+117.525017325" observedRunningTime="2026-03-13 12:51:32.938990297 +0000 UTC m=+119.505663074" watchObservedRunningTime="2026-03-13 12:51:32.947537705 +0000 UTC m=+119.514210472" Mar 13 12:51:33.447829 master-0 kubenswrapper[19715]: I0313 12:51:33.447762 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-nz574"] Mar 13 12:51:33.471888 master-0 kubenswrapper[19715]: W0313 12:51:33.471832 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda64d9c42_4a0b_472a_955a_4edab6b33210.slice/crio-d45aa6c3e5ea2c7fd362eb83fe1216557b8a4e738d0b65abad83648f2d2df37e WatchSource:0}: Error finding container d45aa6c3e5ea2c7fd362eb83fe1216557b8a4e738d0b65abad83648f2d2df37e: Status 404 returned error can't find the container with id d45aa6c3e5ea2c7fd362eb83fe1216557b8a4e738d0b65abad83648f2d2df37e Mar 13 12:51:33.898869 master-0 kubenswrapper[19715]: I0313 12:51:33.898789 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-nz574" event={"ID":"a64d9c42-4a0b-472a-955a-4edab6b33210","Type":"ContainerStarted","Data":"d45aa6c3e5ea2c7fd362eb83fe1216557b8a4e738d0b65abad83648f2d2df37e"} Mar 13 
12:51:33.911048 master-0 kubenswrapper[19715]: I0313 12:51:33.910490 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerStarted","Data":"ce6cbac40e6ff87a6089e3a83a73ca630aed4d9a458d7e6426b09c49aa6ea84d"} Mar 13 12:51:33.911048 master-0 kubenswrapper[19715]: I0313 12:51:33.910535 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerStarted","Data":"5130b2fe955358251a5f9c45b6699a17c0118abcb7d3a20a19463e82bc49a603"} Mar 13 12:51:33.929487 master-0 kubenswrapper[19715]: I0313 12:51:33.929380 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerStarted","Data":"de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52"} Mar 13 12:51:33.961190 master-0 kubenswrapper[19715]: I0313 12:51:33.961076 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=6.454817597 podStartE2EDuration="13.961047961s" podCreationTimestamp="2026-03-13 12:51:20 +0000 UTC" firstStartedPulling="2026-03-13 12:51:23.516539203 +0000 UTC m=+110.083211960" lastFinishedPulling="2026-03-13 12:51:31.022769567 +0000 UTC m=+117.589442324" observedRunningTime="2026-03-13 12:51:33.955381293 +0000 UTC m=+120.522054080" watchObservedRunningTime="2026-03-13 12:51:33.961047961 +0000 UTC m=+120.527720718" Mar 13 12:51:34.955630 master-0 kubenswrapper[19715]: I0313 12:51:34.954241 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerStarted","Data":"2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad"} Mar 13 12:51:34.967227 master-0 
kubenswrapper[19715]: I0313 12:51:34.967121 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" event={"ID":"d9e50ad5-6999-441a-86ef-d56e490d0d75","Type":"ContainerStarted","Data":"03d3a6ada5a644b0b5c6fdbcaaea182840d08116b3534ba70f272820d552a7fe"} Mar 13 12:51:34.967227 master-0 kubenswrapper[19715]: I0313 12:51:34.967224 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" event={"ID":"d9e50ad5-6999-441a-86ef-d56e490d0d75","Type":"ContainerStarted","Data":"c52657aefb4c9571e3a7efd7640c89ad739df49635c1b8c44777e7bc1a835024"} Mar 13 12:51:34.967519 master-0 kubenswrapper[19715]: I0313 12:51:34.967260 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" event={"ID":"d9e50ad5-6999-441a-86ef-d56e490d0d75","Type":"ContainerStarted","Data":"d2f54d7948337346946e82f7f27918be28acf48b85b500c44da5a4b0a6794992"} Mar 13 12:51:35.007620 master-0 kubenswrapper[19715]: I0313 12:51:35.007521 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=3.803339035 podStartE2EDuration="21.007482019s" podCreationTimestamp="2026-03-13 12:51:14 +0000 UTC" firstStartedPulling="2026-03-13 12:51:16.48803825 +0000 UTC m=+103.054711017" lastFinishedPulling="2026-03-13 12:51:33.692181244 +0000 UTC m=+120.258854001" observedRunningTime="2026-03-13 12:51:34.997913269 +0000 UTC m=+121.564586046" watchObservedRunningTime="2026-03-13 12:51:35.007482019 +0000 UTC m=+121.574154786" Mar 13 12:51:35.061858 master-0 kubenswrapper[19715]: I0313 12:51:35.060299 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" podStartSLOduration=3.208858428 podStartE2EDuration="19.060270153s" podCreationTimestamp="2026-03-13 12:51:16 +0000 UTC" firstStartedPulling="2026-03-13 
12:51:17.843523275 +0000 UTC m=+104.410196032" lastFinishedPulling="2026-03-13 12:51:33.694935 +0000 UTC m=+120.261607757" observedRunningTime="2026-03-13 12:51:35.057485686 +0000 UTC m=+121.624158453" watchObservedRunningTime="2026-03-13 12:51:35.060270153 +0000 UTC m=+121.626942920" Mar 13 12:51:35.979344 master-0 kubenswrapper[19715]: I0313 12:51:35.979293 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" Mar 13 12:51:35.991719 master-0 kubenswrapper[19715]: I0313 12:51:35.990661 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:51:36.943850 master-0 kubenswrapper[19715]: I0313 12:51:36.943792 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-8fc4dc979-blhgb" Mar 13 12:51:39.357546 master-0 kubenswrapper[19715]: I0313 12:51:39.356808 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:39.357546 master-0 kubenswrapper[19715]: I0313 12:51:39.356896 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:40.000860 master-0 kubenswrapper[19715]: I0313 12:51:40.000804 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 12:51:40.001220 master-0 kubenswrapper[19715]: I0313 12:51:40.001149 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://dd9e5e8e374c81e1c66f6e45811bee38c8f529d7dd83812725266a3311710c8f" gracePeriod=30 Mar 13 12:51:40.001375 master-0 kubenswrapper[19715]: I0313 12:51:40.001274 19715 
kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" containerID="cri-o://4f7ff4562a79b8bd2c0cbb72f384270ed3c70b557b5276791fba9d8debdb7623" gracePeriod=30 Mar 13 12:51:40.002100 master-0 kubenswrapper[19715]: I0313 12:51:40.001935 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:51:40.002733 master-0 kubenswrapper[19715]: E0313 12:51:40.002266 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:51:40.002733 master-0 kubenswrapper[19715]: I0313 12:51:40.002288 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:51:40.002733 master-0 kubenswrapper[19715]: E0313 12:51:40.002313 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:51:40.002733 master-0 kubenswrapper[19715]: I0313 12:51:40.002322 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:51:40.002733 master-0 kubenswrapper[19715]: E0313 12:51:40.002359 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:51:40.002733 master-0 kubenswrapper[19715]: I0313 12:51:40.002369 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:51:40.002733 master-0 kubenswrapper[19715]: E0313 12:51:40.002413 19715 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:51:40.002733 master-0 kubenswrapper[19715]: I0313 12:51:40.002423 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:51:40.002733 master-0 kubenswrapper[19715]: I0313 12:51:40.002599 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:51:40.002733 master-0 kubenswrapper[19715]: I0313 12:51:40.002639 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:51:40.003172 master-0 kubenswrapper[19715]: I0313 12:51:40.002799 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:51:40.003172 master-0 kubenswrapper[19715]: E0313 12:51:40.003032 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:51:40.003172 master-0 kubenswrapper[19715]: I0313 12:51:40.003049 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:51:40.003316 master-0 kubenswrapper[19715]: I0313 12:51:40.003241 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 12:51:40.003316 master-0 kubenswrapper[19715]: I0313 12:51:40.003279 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 12:51:40.005757 master-0 kubenswrapper[19715]: I0313 12:51:40.005712 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:40.075439 master-0 kubenswrapper[19715]: I0313 12:51:40.075355 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:51:40.196859 master-0 kubenswrapper[19715]: I0313 12:51:40.195849 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e95e98146cf857064826636918715dbe-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e95e98146cf857064826636918715dbe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:40.196859 master-0 kubenswrapper[19715]: I0313 12:51:40.195933 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e95e98146cf857064826636918715dbe-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e95e98146cf857064826636918715dbe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:40.298855 master-0 kubenswrapper[19715]: I0313 12:51:40.298752 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e95e98146cf857064826636918715dbe-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e95e98146cf857064826636918715dbe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:40.299095 master-0 kubenswrapper[19715]: I0313 12:51:40.298909 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e95e98146cf857064826636918715dbe-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e95e98146cf857064826636918715dbe\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:40.299095 master-0 kubenswrapper[19715]: I0313 12:51:40.298979 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e95e98146cf857064826636918715dbe-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e95e98146cf857064826636918715dbe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:40.299095 master-0 kubenswrapper[19715]: I0313 12:51:40.299083 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e95e98146cf857064826636918715dbe-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"e95e98146cf857064826636918715dbe\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:40.349616 master-0 kubenswrapper[19715]: I0313 12:51:40.337702 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:51:40.365712 master-0 kubenswrapper[19715]: I0313 12:51:40.363678 19715 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="14c64f96-b394-4ea6-8797-77f54e387c95" Mar 13 12:51:40.367410 master-0 kubenswrapper[19715]: I0313 12:51:40.367375 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:40.399920 master-0 kubenswrapper[19715]: I0313 12:51:40.399859 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 12:51:40.400383 master-0 kubenswrapper[19715]: I0313 12:51:40.400021 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets" (OuterVolumeSpecName: "secrets") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:51:40.400383 master-0 kubenswrapper[19715]: I0313 12:51:40.400108 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 12:51:40.400383 master-0 kubenswrapper[19715]: I0313 12:51:40.400171 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 12:51:40.400383 master-0 kubenswrapper[19715]: I0313 12:51:40.400199 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs" (OuterVolumeSpecName: "logs") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:51:40.400383 master-0 kubenswrapper[19715]: I0313 12:51:40.400210 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 12:51:40.400383 master-0 kubenswrapper[19715]: I0313 12:51:40.400232 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 12:51:40.400383 master-0 kubenswrapper[19715]: I0313 12:51:40.400257 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config" (OuterVolumeSpecName: "config") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:51:40.400383 master-0 kubenswrapper[19715]: I0313 12:51:40.400305 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "ssl-certs-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:51:40.400383 master-0 kubenswrapper[19715]: I0313 12:51:40.400359 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:51:40.400769 master-0 kubenswrapper[19715]: I0313 12:51:40.400736 19715 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:40.400769 master-0 kubenswrapper[19715]: I0313 12:51:40.400752 19715 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:40.400769 master-0 kubenswrapper[19715]: I0313 12:51:40.400765 19715 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:40.400899 master-0 kubenswrapper[19715]: I0313 12:51:40.400777 19715 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:40.400899 master-0 kubenswrapper[19715]: I0313 12:51:40.400788 19715 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:40.405891 master-0 kubenswrapper[19715]: W0313 12:51:40.405835 19715 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode95e98146cf857064826636918715dbe.slice/crio-1e49a55b19c7e5ed9084a631ff212f074a8a102bbe112738431e943115a97d66 WatchSource:0}: Error finding container 1e49a55b19c7e5ed9084a631ff212f074a8a102bbe112738431e943115a97d66: Status 404 returned error can't find the container with id 1e49a55b19c7e5ed9084a631ff212f074a8a102bbe112738431e943115a97d66 Mar 13 12:51:41.083250 master-0 kubenswrapper[19715]: I0313 12:51:41.083193 19715 generic.go:334] "Generic (PLEG): container finished" podID="787f8414-a607-4672-bf7f-6494b4250de1" containerID="2687ae6b014a5827eef79787820c82f3a426c8b755ef25f1712b51c3677d0ae1" exitCode=0 Mar 13 12:51:41.083556 master-0 kubenswrapper[19715]: I0313 12:51:41.083279 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"787f8414-a607-4672-bf7f-6494b4250de1","Type":"ContainerDied","Data":"2687ae6b014a5827eef79787820c82f3a426c8b755ef25f1712b51c3677d0ae1"} Mar 13 12:51:41.094568 master-0 kubenswrapper[19715]: I0313 12:51:41.094503 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e95e98146cf857064826636918715dbe","Type":"ContainerStarted","Data":"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b"} Mar 13 12:51:41.094568 master-0 kubenswrapper[19715]: I0313 12:51:41.094557 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e95e98146cf857064826636918715dbe","Type":"ContainerStarted","Data":"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1"} Mar 13 12:51:41.094568 master-0 kubenswrapper[19715]: I0313 12:51:41.094588 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"e95e98146cf857064826636918715dbe","Type":"ContainerStarted","Data":"1e49a55b19c7e5ed9084a631ff212f074a8a102bbe112738431e943115a97d66"} Mar 13 12:51:41.097180 master-0 kubenswrapper[19715]: I0313 12:51:41.097135 19715 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="4f7ff4562a79b8bd2c0cbb72f384270ed3c70b557b5276791fba9d8debdb7623" exitCode=0 Mar 13 12:51:41.097180 master-0 kubenswrapper[19715]: I0313 12:51:41.097168 19715 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="dd9e5e8e374c81e1c66f6e45811bee38c8f529d7dd83812725266a3311710c8f" exitCode=0 Mar 13 12:51:41.097308 master-0 kubenswrapper[19715]: I0313 12:51:41.097210 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aca9698bc9cab04a59911456f78b517e7653af17b6c69e3d592b8b32239ec39d" Mar 13 12:51:41.097308 master-0 kubenswrapper[19715]: I0313 12:51:41.097235 19715 scope.go:117] "RemoveContainer" containerID="6645727a95fe38d57e5e3f91888d08002ce9c2539d9a9126739ad973b5b53c72" Mar 13 12:51:41.097484 master-0 kubenswrapper[19715]: I0313 12:51:41.097396 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 12:51:41.145226 master-0 kubenswrapper[19715]: I0313 12:51:41.145164 19715 scope.go:117] "RemoveContainer" containerID="5ae7ae35f7136762cbb13e8c36aee38aecdcf9e047584314d44cc6cd1301533e" Mar 13 12:51:41.715711 master-0 kubenswrapper[19715]: I0313 12:51:41.715641 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78c05e1499b533b83f091333d61f045" path="/var/lib/kubelet/pods/f78c05e1499b533b83f091333d61f045/volumes" Mar 13 12:51:41.716462 master-0 kubenswrapper[19715]: I0313 12:51:41.716291 19715 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 13 12:51:41.732645 master-0 kubenswrapper[19715]: I0313 12:51:41.732560 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 12:51:41.732645 master-0 kubenswrapper[19715]: I0313 12:51:41.732632 19715 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="14c64f96-b394-4ea6-8797-77f54e387c95" Mar 13 12:51:41.751454 master-0 kubenswrapper[19715]: I0313 12:51:41.746666 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 12:51:41.751454 master-0 kubenswrapper[19715]: I0313 12:51:41.746753 19715 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="14c64f96-b394-4ea6-8797-77f54e387c95" Mar 13 12:51:42.123624 master-0 kubenswrapper[19715]: I0313 12:51:42.123057 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"e95e98146cf857064826636918715dbe","Type":"ContainerStarted","Data":"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597"} Mar 13 12:51:42.123624 master-0 kubenswrapper[19715]: I0313 12:51:42.123106 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e95e98146cf857064826636918715dbe","Type":"ContainerStarted","Data":"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc"} Mar 13 12:51:42.155711 master-0 kubenswrapper[19715]: I0313 12:51:42.153966 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.153944278 podStartE2EDuration="2.153944278s" podCreationTimestamp="2026-03-13 12:51:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:51:42.152814063 +0000 UTC m=+128.719486820" watchObservedRunningTime="2026-03-13 12:51:42.153944278 +0000 UTC m=+128.720617035" Mar 13 12:51:42.782626 master-0 kubenswrapper[19715]: I0313 12:51:42.782570 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:51:42.919101 master-0 kubenswrapper[19715]: I0313 12:51:42.918966 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-kubelet-dir\") pod \"787f8414-a607-4672-bf7f-6494b4250de1\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " Mar 13 12:51:42.919420 master-0 kubenswrapper[19715]: I0313 12:51:42.919147 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/787f8414-a607-4672-bf7f-6494b4250de1-kube-api-access\") pod \"787f8414-a607-4672-bf7f-6494b4250de1\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " Mar 13 12:51:42.919420 master-0 kubenswrapper[19715]: I0313 12:51:42.919133 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "787f8414-a607-4672-bf7f-6494b4250de1" (UID: "787f8414-a607-4672-bf7f-6494b4250de1"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:51:42.919420 master-0 kubenswrapper[19715]: I0313 12:51:42.919171 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-var-lock\") pod \"787f8414-a607-4672-bf7f-6494b4250de1\" (UID: \"787f8414-a607-4672-bf7f-6494b4250de1\") " Mar 13 12:51:42.919420 master-0 kubenswrapper[19715]: I0313 12:51:42.919220 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-var-lock" (OuterVolumeSpecName: "var-lock") pod "787f8414-a607-4672-bf7f-6494b4250de1" (UID: "787f8414-a607-4672-bf7f-6494b4250de1"). 
InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:51:42.920043 master-0 kubenswrapper[19715]: I0313 12:51:42.919993 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:42.920043 master-0 kubenswrapper[19715]: I0313 12:51:42.920032 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/787f8414-a607-4672-bf7f-6494b4250de1-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:42.922782 master-0 kubenswrapper[19715]: I0313 12:51:42.922728 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/787f8414-a607-4672-bf7f-6494b4250de1-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "787f8414-a607-4672-bf7f-6494b4250de1" (UID: "787f8414-a607-4672-bf7f-6494b4250de1"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:51:43.022972 master-0 kubenswrapper[19715]: I0313 12:51:43.021363 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/787f8414-a607-4672-bf7f-6494b4250de1-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:51:43.137637 master-0 kubenswrapper[19715]: I0313 12:51:43.137420 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 12:51:43.138286 master-0 kubenswrapper[19715]: I0313 12:51:43.138254 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"787f8414-a607-4672-bf7f-6494b4250de1","Type":"ContainerDied","Data":"a6e2536b371d826f4b6e1106d8a7c2512343398a962e2f9ddabfa67f445087eb"} Mar 13 12:51:43.138286 master-0 kubenswrapper[19715]: I0313 12:51:43.138287 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a6e2536b371d826f4b6e1106d8a7c2512343398a962e2f9ddabfa67f445087eb" Mar 13 12:51:50.367905 master-0 kubenswrapper[19715]: I0313 12:51:50.367689 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:50.367905 master-0 kubenswrapper[19715]: I0313 12:51:50.367754 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:50.367905 master-0 kubenswrapper[19715]: I0313 12:51:50.367938 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:50.369525 master-0 kubenswrapper[19715]: I0313 12:51:50.368495 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:50.376714 master-0 kubenswrapper[19715]: I0313 12:51:50.376087 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:50.377619 master-0 kubenswrapper[19715]: I0313 12:51:50.377351 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
Mar 13 12:51:51.385532 master-0 kubenswrapper[19715]: I0313 12:51:51.385473 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:51.386479 master-0 kubenswrapper[19715]: I0313 12:51:51.386047 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:51:59.366871 master-0 kubenswrapper[19715]: I0313 12:51:59.366752 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:51:59.372709 master-0 kubenswrapper[19715]: I0313 12:51:59.372677 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-6b94c647f5-cmzc9" Mar 13 12:52:00.173003 master-0 kubenswrapper[19715]: I0313 12:52:00.172901 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-b649d7df7-lm9xz"] Mar 13 12:52:00.173448 master-0 kubenswrapper[19715]: E0313 12:52:00.173344 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="787f8414-a607-4672-bf7f-6494b4250de1" containerName="installer" Mar 13 12:52:00.173448 master-0 kubenswrapper[19715]: I0313 12:52:00.173384 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="787f8414-a607-4672-bf7f-6494b4250de1" containerName="installer" Mar 13 12:52:00.173721 master-0 kubenswrapper[19715]: I0313 12:52:00.173686 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="787f8414-a607-4672-bf7f-6494b4250de1" containerName="installer" Mar 13 12:52:00.174493 master-0 kubenswrapper[19715]: I0313 12:52:00.174448 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.177911 master-0 kubenswrapper[19715]: I0313 12:52:00.177853 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 13 12:52:00.178034 master-0 kubenswrapper[19715]: I0313 12:52:00.177921 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 13 12:52:00.180186 master-0 kubenswrapper[19715]: I0313 12:52:00.179199 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-mr4r4" Mar 13 12:52:00.180186 master-0 kubenswrapper[19715]: I0313 12:52:00.179946 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 13 12:52:00.180386 master-0 kubenswrapper[19715]: I0313 12:52:00.180190 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 13 12:52:00.184071 master-0 kubenswrapper[19715]: I0313 12:52:00.184020 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 13 12:52:00.186333 master-0 kubenswrapper[19715]: I0313 12:52:00.186289 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 13 12:52:00.207384 master-0 kubenswrapper[19715]: I0313 12:52:00.206350 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-b649d7df7-lm9xz"] Mar 13 12:52:00.359115 master-0 kubenswrapper[19715]: I0313 12:52:00.358457 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct5vs\" (UniqueName: \"kubernetes.io/projected/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-kube-api-access-ct5vs\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 
12:52:00.359115 master-0 kubenswrapper[19715]: I0313 12:52:00.358542 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-trusted-ca-bundle\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.359115 master-0 kubenswrapper[19715]: I0313 12:52:00.358570 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-oauth-serving-cert\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.359115 master-0 kubenswrapper[19715]: I0313 12:52:00.358634 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-config\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.359115 master-0 kubenswrapper[19715]: I0313 12:52:00.358658 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-service-ca\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.359115 master-0 kubenswrapper[19715]: I0313 12:52:00.358679 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-serving-cert\") pod 
\"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.359115 master-0 kubenswrapper[19715]: I0313 12:52:00.358694 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-oauth-config\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.462809 master-0 kubenswrapper[19715]: I0313 12:52:00.460839 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-config\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.462809 master-0 kubenswrapper[19715]: I0313 12:52:00.460935 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-service-ca\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.462809 master-0 kubenswrapper[19715]: I0313 12:52:00.460979 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-serving-cert\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.462809 master-0 kubenswrapper[19715]: I0313 12:52:00.461006 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-oauth-config\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.462809 master-0 kubenswrapper[19715]: I0313 12:52:00.461091 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct5vs\" (UniqueName: \"kubernetes.io/projected/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-kube-api-access-ct5vs\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.462809 master-0 kubenswrapper[19715]: I0313 12:52:00.461185 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-trusted-ca-bundle\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.462809 master-0 kubenswrapper[19715]: I0313 12:52:00.461232 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-oauth-serving-cert\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.463775 master-0 kubenswrapper[19715]: I0313 12:52:00.463198 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-config\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.466458 master-0 kubenswrapper[19715]: I0313 12:52:00.464443 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-service-ca\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.466458 master-0 kubenswrapper[19715]: I0313 12:52:00.464480 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-oauth-serving-cert\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.466911 master-0 kubenswrapper[19715]: I0313 12:52:00.466821 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-trusted-ca-bundle\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.471342 master-0 kubenswrapper[19715]: I0313 12:52:00.470079 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-oauth-config\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.478655 master-0 kubenswrapper[19715]: I0313 12:52:00.477546 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-serving-cert\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.493980 master-0 kubenswrapper[19715]: I0313 12:52:00.493851 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ct5vs\" (UniqueName: \"kubernetes.io/projected/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-kube-api-access-ct5vs\") pod \"console-b649d7df7-lm9xz\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:00.538632 master-0 kubenswrapper[19715]: I0313 12:52:00.528882 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:09.098615 master-0 kubenswrapper[19715]: I0313 12:52:09.091971 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Mar 13 12:52:09.105072 master-0 kubenswrapper[19715]: I0313 12:52:09.104992 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:09.114197 master-0 kubenswrapper[19715]: I0313 12:52:09.112120 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-6-master-0"] Mar 13 12:52:09.114466 master-0 kubenswrapper[19715]: I0313 12:52:09.114232 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 12:52:09.117088 master-0 kubenswrapper[19715]: I0313 12:52:09.114918 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7gz29" Mar 13 12:52:09.276394 master-0 kubenswrapper[19715]: I0313 12:52:09.276213 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kube-api-access\") pod \"installer-6-master-0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:09.276394 master-0 kubenswrapper[19715]: I0313 12:52:09.276306 19715 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-var-lock\") pod \"installer-6-master-0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:09.276394 master-0 kubenswrapper[19715]: I0313 12:52:09.276352 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:09.377246 master-0 kubenswrapper[19715]: I0313 12:52:09.377094 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-var-lock\") pod \"installer-6-master-0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:09.377246 master-0 kubenswrapper[19715]: I0313 12:52:09.377163 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:09.377540 master-0 kubenswrapper[19715]: I0313 12:52:09.377280 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kube-api-access\") pod \"installer-6-master-0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:09.379473 master-0 kubenswrapper[19715]: I0313 12:52:09.377796 19715 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-var-lock\") pod \"installer-6-master-0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:09.379473 master-0 kubenswrapper[19715]: I0313 12:52:09.377851 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:09.420710 master-0 kubenswrapper[19715]: I0313 12:52:09.406493 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kube-api-access\") pod \"installer-6-master-0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") " pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:09.450638 master-0 kubenswrapper[19715]: I0313 12:52:09.450542 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:12.700155 master-0 kubenswrapper[19715]: I0313 12:52:12.700051 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 13 12:52:12.701514 master-0 kubenswrapper[19715]: I0313 12:52:12.701486 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:12.712608 master-0 kubenswrapper[19715]: I0313 12:52:12.711632 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 12:52:12.715538 master-0 kubenswrapper[19715]: I0313 12:52:12.713156 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-jdg75" Mar 13 12:52:12.719856 master-0 kubenswrapper[19715]: I0313 12:52:12.719796 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0"] Mar 13 12:52:12.720043 master-0 kubenswrapper[19715]: I0313 12:52:12.719863 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 12:52:12.721225 master-0 kubenswrapper[19715]: I0313 12:52:12.721186 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:52:12.721528 master-0 kubenswrapper[19715]: I0313 12:52:12.721441 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.722326 master-0 kubenswrapper[19715]: I0313 12:52:12.722124 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver" containerID="cri-o://3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e" gracePeriod=15 Mar 13 12:52:12.722466 master-0 kubenswrapper[19715]: I0313 12:52:12.722429 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-check-endpoints" containerID="cri-o://9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff" gracePeriod=15 Mar 13 12:52:12.722517 master-0 kubenswrapper[19715]: I0313 12:52:12.722500 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a" gracePeriod=15 Mar 13 12:52:12.722556 master-0 kubenswrapper[19715]: I0313 12:52:12.722543 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771" gracePeriod=15 Mar 13 12:52:12.722809 master-0 kubenswrapper[19715]: I0313 12:52:12.722771 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-syncer" 
containerID="cri-o://d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4" gracePeriod=15 Mar 13 12:52:12.723736 master-0 kubenswrapper[19715]: I0313 12:52:12.723703 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:52:12.724179 master-0 kubenswrapper[19715]: E0313 12:52:12.724142 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-syncer" Mar 13 12:52:12.724179 master-0 kubenswrapper[19715]: I0313 12:52:12.724177 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-syncer" Mar 13 12:52:12.724286 master-0 kubenswrapper[19715]: E0313 12:52:12.724212 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-check-endpoints" Mar 13 12:52:12.724286 master-0 kubenswrapper[19715]: I0313 12:52:12.724221 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-check-endpoints" Mar 13 12:52:12.724286 master-0 kubenswrapper[19715]: E0313 12:52:12.724241 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver" Mar 13 12:52:12.724286 master-0 kubenswrapper[19715]: I0313 12:52:12.724250 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver" Mar 13 12:52:12.724286 master-0 kubenswrapper[19715]: E0313 12:52:12.724271 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-insecure-readyz" Mar 13 12:52:12.724286 master-0 kubenswrapper[19715]: I0313 12:52:12.724279 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" 
containerName="kube-apiserver-insecure-readyz" Mar 13 12:52:12.724286 master-0 kubenswrapper[19715]: E0313 12:52:12.724292 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: I0313 12:52:12.724302 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: E0313 12:52:12.724468 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="setup" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: I0313 12:52:12.724477 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="setup" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: E0313 12:52:12.724500 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-check-endpoints" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: I0313 12:52:12.724508 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-check-endpoints" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: I0313 12:52:12.724752 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-check-endpoints" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: I0313 12:52:12.724784 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-check-endpoints" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: I0313 12:52:12.724801 19715 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: I0313 12:52:12.724821 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: I0313 12:52:12.724858 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-insecure-readyz" Mar 13 12:52:12.725770 master-0 kubenswrapper[19715]: I0313 12:52:12.724875 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-syncer" Mar 13 12:52:12.745716 master-0 kubenswrapper[19715]: I0313 12:52:12.739945 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" podUID="4c3280e9367536f782caf8bdc07edb85" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 12:52:12.813498 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 12:52:12.813608 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 
12:52:12.813640 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 12:52:12.813682 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-var-lock\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 12:52:12.813717 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 12:52:12.813740 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 12:52:12.813758 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 12:52:12.814027 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 12:52:12.814104 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 12:52:12.814223 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:12.815747 master-0 kubenswrapper[19715]: I0313 12:52:12.814270 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.847521 master-0 kubenswrapper[19715]: E0313 12:52:12.847432 19715 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.915542 master-0 kubenswrapper[19715]: I0313 12:52:12.915435 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.915619 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.915681 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.915660 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.915742 19715 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.915772 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.915855 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.915934 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-var-lock\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916018 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916057 19715 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-var-lock\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916018 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916134 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916170 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916203 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916269 19715 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916282 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916322 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916332 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916359 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 
12:52:12.916383 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.916546 master-0 kubenswrapper[19715]: I0313 12:52:12.916422 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:12.919805 master-0 kubenswrapper[19715]: E0313 12:52:12.918018 19715 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-5-master-0: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:12.919805 master-0 kubenswrapper[19715]: E0313 12:52:12.918345 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access podName:2a0e239c-fe39-43af-8b0a-2964897d8b92 nodeName:}" failed. No retries permitted until 2026-03-13 12:52:13.418262492 +0000 UTC m=+159.984935249 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access") pod "installer-5-master-0" (UID: "2a0e239c-fe39-43af-8b0a-2964897d8b92") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:12.919805 master-0 kubenswrapper[19715]: E0313 12:52:12.919106 19715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{installer-5-master-0.189c67a9f404346b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-5-master-0,UID:2a0e239c-fe39-43af-8b0a-2964897d8b92,APIVersion:v1,ResourceVersion:14739,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access\" : failed to fetch token: Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token\": dial tcp 192.168.32.10:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:52:12.918183019 +0000 UTC m=+159.484855776,LastTimestamp:2026-03-13 12:52:12.918183019 +0000 UTC m=+159.484855776,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:52:13.150617 master-0 kubenswrapper[19715]: I0313 12:52:13.149719 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:13.423639 master-0 kubenswrapper[19715]: I0313 12:52:13.422835 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:13.424266 master-0 kubenswrapper[19715]: E0313 12:52:13.424238 19715 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-5-master-0: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:13.424533 master-0 kubenswrapper[19715]: E0313 12:52:13.424511 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access podName:2a0e239c-fe39-43af-8b0a-2964897d8b92 nodeName:}" failed. No retries permitted until 2026-03-13 12:52:14.424416267 +0000 UTC m=+160.991089024 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access") pod "installer-5-master-0" (UID: "2a0e239c-fe39-43af-8b0a-2964897d8b92") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:13.993007 master-0 kubenswrapper[19715]: E0313 12:52:13.992778 19715 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:52:13Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:52:13Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:52:13Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:52:13Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:13.994173 master-0 kubenswrapper[19715]: E0313 12:52:13.994116 19715 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:13.994928 master-0 kubenswrapper[19715]: E0313 12:52:13.994750 19715 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:13.995306 master-0 kubenswrapper[19715]: E0313 12:52:13.995262 19715 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:13.995782 master-0 kubenswrapper[19715]: E0313 12:52:13.995746 19715 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:13.995782 master-0 kubenswrapper[19715]: E0313 12:52:13.995775 19715 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 12:52:14.451562 master-0 kubenswrapper[19715]: I0313 12:52:14.451236 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:14.453071 master-0 kubenswrapper[19715]: E0313 12:52:14.453028 19715 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-5-master-0: failed to fetch token: Post 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:14.453149 master-0 kubenswrapper[19715]: E0313 12:52:14.453111 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access podName:2a0e239c-fe39-43af-8b0a-2964897d8b92 nodeName:}" failed. No retries permitted until 2026-03-13 12:52:16.453093448 +0000 UTC m=+163.019766205 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access") pod "installer-5-master-0" (UID: "2a0e239c-fe39-43af-8b0a-2964897d8b92") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:16.548180 master-0 kubenswrapper[19715]: I0313 12:52:16.548104 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:16.549469 master-0 kubenswrapper[19715]: E0313 12:52:16.549417 19715 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-5-master-0: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:16.549637 master-0 kubenswrapper[19715]: E0313 12:52:16.549554 19715 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access podName:2a0e239c-fe39-43af-8b0a-2964897d8b92 nodeName:}" failed. No retries permitted until 2026-03-13 12:52:20.549522045 +0000 UTC m=+167.116194802 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access") pod "installer-5-master-0" (UID: "2a0e239c-fe39-43af-8b0a-2964897d8b92") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:18.742787 master-0 kubenswrapper[19715]: I0313 12:52:18.742727 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_cdcecc61ff5eeb08bd2a3ac12599e4f9/kube-apiserver-check-endpoints/0.log" Mar 13 12:52:18.745771 master-0 kubenswrapper[19715]: I0313 12:52:18.745706 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_cdcecc61ff5eeb08bd2a3ac12599e4f9/kube-apiserver-cert-syncer/0.log" Mar 13 12:52:18.747079 master-0 kubenswrapper[19715]: I0313 12:52:18.747010 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:18.748000 master-0 kubenswrapper[19715]: I0313 12:52:18.747945 19715 status_manager.go:851] "Failed to get status for pod" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:18.756469 master-0 kubenswrapper[19715]: W0313 12:52:18.756402 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podacbb43bf2cf27ed60d1f635fd6638ac7.slice/crio-b0e4b62c1f6b79b228cbfed88f6c970e1779a99f80a8e5368d2481a973181881 WatchSource:0}: Error finding container b0e4b62c1f6b79b228cbfed88f6c970e1779a99f80a8e5368d2481a973181881: Status 404 returned error can't find the container with id b0e4b62c1f6b79b228cbfed88f6c970e1779a99f80a8e5368d2481a973181881 Mar 13 12:52:18.947661 master-0 kubenswrapper[19715]: I0313 12:52:18.947410 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"cdcecc61ff5eeb08bd2a3ac12599e4f9\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " Mar 13 12:52:18.947661 master-0 kubenswrapper[19715]: I0313 12:52:18.947501 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"cdcecc61ff5eeb08bd2a3ac12599e4f9\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " Mar 13 12:52:18.948043 master-0 kubenswrapper[19715]: I0313 12:52:18.947686 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") 
pod \"cdcecc61ff5eeb08bd2a3ac12599e4f9\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " Mar 13 12:52:18.948043 master-0 kubenswrapper[19715]: I0313 12:52:18.947790 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "cdcecc61ff5eeb08bd2a3ac12599e4f9" (UID: "cdcecc61ff5eeb08bd2a3ac12599e4f9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:18.948185 master-0 kubenswrapper[19715]: I0313 12:52:18.948084 19715 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:18.948185 master-0 kubenswrapper[19715]: I0313 12:52:18.948134 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "cdcecc61ff5eeb08bd2a3ac12599e4f9" (UID: "cdcecc61ff5eeb08bd2a3ac12599e4f9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:18.948412 master-0 kubenswrapper[19715]: I0313 12:52:18.948327 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "cdcecc61ff5eeb08bd2a3ac12599e4f9" (UID: "cdcecc61ff5eeb08bd2a3ac12599e4f9"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:19.050161 master-0 kubenswrapper[19715]: I0313 12:52:19.050091 19715 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:19.050161 master-0 kubenswrapper[19715]: I0313 12:52:19.050157 19715 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:19.208482 master-0 kubenswrapper[19715]: I0313 12:52:19.208383 19715 generic.go:334] "Generic (PLEG): container finished" podID="cc6e9ceb-c6bf-409f-b515-b441a94db482" containerID="a06ff4033f4b0ff4cea4546690c4b01d7db25e90d2f3498981f6c1d007061576" exitCode=0 Mar 13 12:52:19.208482 master-0 kubenswrapper[19715]: I0313 12:52:19.208457 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"cc6e9ceb-c6bf-409f-b515-b441a94db482","Type":"ContainerDied","Data":"a06ff4033f4b0ff4cea4546690c4b01d7db25e90d2f3498981f6c1d007061576"} Mar 13 12:52:19.210839 master-0 kubenswrapper[19715]: I0313 12:52:19.210769 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:19.211557 master-0 kubenswrapper[19715]: I0313 12:52:19.211514 19715 status_manager.go:851] "Failed to get status for pod" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 
192.168.32.10:6443: connect: connection refused" Mar 13 12:52:19.214034 master-0 kubenswrapper[19715]: I0313 12:52:19.213994 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"acbb43bf2cf27ed60d1f635fd6638ac7","Type":"ContainerStarted","Data":"4b40357715494cbae0cee70bec112e496fcdeddba27e7b49134620a4e190c738"} Mar 13 12:52:19.214810 master-0 kubenswrapper[19715]: I0313 12:52:19.214048 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"acbb43bf2cf27ed60d1f635fd6638ac7","Type":"ContainerStarted","Data":"b0e4b62c1f6b79b228cbfed88f6c970e1779a99f80a8e5368d2481a973181881"} Mar 13 12:52:19.215541 master-0 kubenswrapper[19715]: E0313 12:52:19.215489 19715 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:52:19.215757 master-0 kubenswrapper[19715]: I0313 12:52:19.215454 19715 status_manager.go:851] "Failed to get status for pod" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:19.218061 master-0 kubenswrapper[19715]: I0313 12:52:19.217179 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:19.220449 master-0 kubenswrapper[19715]: 
I0313 12:52:19.220045 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-nz574" event={"ID":"a64d9c42-4a0b-472a-955a-4edab6b33210","Type":"ContainerStarted","Data":"9ea4939b9e4e6fa4b716f11dfeb0d2d87c48bb67f351171e6b3c77ff827ce040"} Mar 13 12:52:19.223364 master-0 kubenswrapper[19715]: I0313 12:52:19.222365 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-84f57b9877-nz574" Mar 13 12:52:19.223364 master-0 kubenswrapper[19715]: I0313 12:52:19.222458 19715 patch_prober.go:28] interesting pod/downloads-84f57b9877-nz574 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.91:8080/\": dial tcp 10.128.0.91:8080: connect: connection refused" start-of-body= Mar 13 12:52:19.223364 master-0 kubenswrapper[19715]: I0313 12:52:19.222500 19715 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-nz574" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.91:8080/\": dial tcp 10.128.0.91:8080: connect: connection refused" Mar 13 12:52:19.223364 master-0 kubenswrapper[19715]: I0313 12:52:19.222948 19715 status_manager.go:851] "Failed to get status for pod" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:19.224123 master-0 kubenswrapper[19715]: I0313 12:52:19.224061 19715 status_manager.go:851] "Failed to get status for pod" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" pod="openshift-console/downloads-84f57b9877-nz574" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-nz574\": dial tcp 
192.168.32.10:6443: connect: connection refused" Mar 13 12:52:19.224850 master-0 kubenswrapper[19715]: I0313 12:52:19.224808 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:19.226454 master-0 kubenswrapper[19715]: I0313 12:52:19.226415 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_cdcecc61ff5eeb08bd2a3ac12599e4f9/kube-apiserver-check-endpoints/0.log" Mar 13 12:52:19.228446 master-0 kubenswrapper[19715]: I0313 12:52:19.228412 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_cdcecc61ff5eeb08bd2a3ac12599e4f9/kube-apiserver-cert-syncer/0.log" Mar 13 12:52:19.229494 master-0 kubenswrapper[19715]: I0313 12:52:19.229469 19715 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff" exitCode=0 Mar 13 12:52:19.229494 master-0 kubenswrapper[19715]: I0313 12:52:19.229492 19715 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a" exitCode=0 Mar 13 12:52:19.229671 master-0 kubenswrapper[19715]: I0313 12:52:19.229501 19715 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771" exitCode=0 Mar 13 12:52:19.229671 master-0 kubenswrapper[19715]: I0313 12:52:19.229511 19715 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" 
containerID="d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4" exitCode=2 Mar 13 12:52:19.229671 master-0 kubenswrapper[19715]: I0313 12:52:19.229519 19715 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e" exitCode=0 Mar 13 12:52:19.229671 master-0 kubenswrapper[19715]: I0313 12:52:19.229623 19715 scope.go:117] "RemoveContainer" containerID="9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff" Mar 13 12:52:19.229860 master-0 kubenswrapper[19715]: I0313 12:52:19.229804 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:19.257401 master-0 kubenswrapper[19715]: I0313 12:52:19.257292 19715 status_manager.go:851] "Failed to get status for pod" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:19.258076 master-0 kubenswrapper[19715]: I0313 12:52:19.258002 19715 status_manager.go:851] "Failed to get status for pod" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" pod="openshift-console/downloads-84f57b9877-nz574" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-nz574\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:19.258550 master-0 kubenswrapper[19715]: I0313 12:52:19.258425 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection 
refused" Mar 13 12:52:19.269757 master-0 kubenswrapper[19715]: I0313 12:52:19.269718 19715 scope.go:117] "RemoveContainer" containerID="3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d" Mar 13 12:52:19.291927 master-0 kubenswrapper[19715]: I0313 12:52:19.291808 19715 scope.go:117] "RemoveContainer" containerID="6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a" Mar 13 12:52:19.309311 master-0 kubenswrapper[19715]: I0313 12:52:19.309259 19715 scope.go:117] "RemoveContainer" containerID="c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771" Mar 13 12:52:19.332095 master-0 kubenswrapper[19715]: I0313 12:52:19.332054 19715 scope.go:117] "RemoveContainer" containerID="d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4" Mar 13 12:52:19.355173 master-0 kubenswrapper[19715]: I0313 12:52:19.355054 19715 scope.go:117] "RemoveContainer" containerID="3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e" Mar 13 12:52:19.378047 master-0 kubenswrapper[19715]: I0313 12:52:19.377995 19715 scope.go:117] "RemoveContainer" containerID="70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d" Mar 13 12:52:19.396622 master-0 kubenswrapper[19715]: I0313 12:52:19.396551 19715 scope.go:117] "RemoveContainer" containerID="9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff" Mar 13 12:52:19.397238 master-0 kubenswrapper[19715]: E0313 12:52:19.397191 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": container with ID starting with 9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff not found: ID does not exist" containerID="9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff" Mar 13 12:52:19.397377 master-0 kubenswrapper[19715]: I0313 12:52:19.397345 19715 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff"} err="failed to get container status \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": rpc error: code = NotFound desc = could not find container \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": container with ID starting with 9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff not found: ID does not exist" Mar 13 12:52:19.397672 master-0 kubenswrapper[19715]: I0313 12:52:19.397526 19715 scope.go:117] "RemoveContainer" containerID="3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d" Mar 13 12:52:19.398092 master-0 kubenswrapper[19715]: E0313 12:52:19.398064 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": container with ID starting with 3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d not found: ID does not exist" containerID="3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d" Mar 13 12:52:19.398721 master-0 kubenswrapper[19715]: I0313 12:52:19.398687 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d"} err="failed to get container status \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": rpc error: code = NotFound desc = could not find container \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": container with ID starting with 3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d not found: ID does not exist" Mar 13 12:52:19.398904 master-0 kubenswrapper[19715]: I0313 12:52:19.398885 19715 scope.go:117] "RemoveContainer" containerID="6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a" Mar 13 12:52:19.400905 master-0 kubenswrapper[19715]: E0313 
12:52:19.400827 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": container with ID starting with 6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a not found: ID does not exist" containerID="6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a" Mar 13 12:52:19.401021 master-0 kubenswrapper[19715]: I0313 12:52:19.400891 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a"} err="failed to get container status \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": rpc error: code = NotFound desc = could not find container \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": container with ID starting with 6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a not found: ID does not exist" Mar 13 12:52:19.401021 master-0 kubenswrapper[19715]: I0313 12:52:19.400928 19715 scope.go:117] "RemoveContainer" containerID="c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771" Mar 13 12:52:19.401901 master-0 kubenswrapper[19715]: E0313 12:52:19.401867 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": container with ID starting with c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771 not found: ID does not exist" containerID="c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771" Mar 13 12:52:19.401967 master-0 kubenswrapper[19715]: I0313 12:52:19.401897 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771"} err="failed to get container status 
\"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": rpc error: code = NotFound desc = could not find container \"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": container with ID starting with c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771 not found: ID does not exist" Mar 13 12:52:19.401967 master-0 kubenswrapper[19715]: I0313 12:52:19.401941 19715 scope.go:117] "RemoveContainer" containerID="d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4" Mar 13 12:52:19.402379 master-0 kubenswrapper[19715]: E0313 12:52:19.402348 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": container with ID starting with d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4 not found: ID does not exist" containerID="d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4" Mar 13 12:52:19.402430 master-0 kubenswrapper[19715]: I0313 12:52:19.402373 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4"} err="failed to get container status \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": rpc error: code = NotFound desc = could not find container \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": container with ID starting with d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4 not found: ID does not exist" Mar 13 12:52:19.402430 master-0 kubenswrapper[19715]: I0313 12:52:19.402389 19715 scope.go:117] "RemoveContainer" containerID="3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e" Mar 13 12:52:19.403391 master-0 kubenswrapper[19715]: E0313 12:52:19.403361 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": container with ID starting with 3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e not found: ID does not exist" containerID="3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e" Mar 13 12:52:19.403391 master-0 kubenswrapper[19715]: I0313 12:52:19.403384 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e"} err="failed to get container status \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": rpc error: code = NotFound desc = could not find container \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": container with ID starting with 3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e not found: ID does not exist" Mar 13 12:52:19.403546 master-0 kubenswrapper[19715]: I0313 12:52:19.403398 19715 scope.go:117] "RemoveContainer" containerID="70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d" Mar 13 12:52:19.403664 master-0 kubenswrapper[19715]: E0313 12:52:19.403638 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": container with ID starting with 70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d not found: ID does not exist" containerID="70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d" Mar 13 12:52:19.403664 master-0 kubenswrapper[19715]: I0313 12:52:19.403658 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d"} err="failed to get container status \"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": rpc error: code = NotFound desc = could not find container 
\"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": container with ID starting with 70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d not found: ID does not exist" Mar 13 12:52:19.403824 master-0 kubenswrapper[19715]: I0313 12:52:19.403672 19715 scope.go:117] "RemoveContainer" containerID="9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff" Mar 13 12:52:19.404315 master-0 kubenswrapper[19715]: I0313 12:52:19.404051 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff"} err="failed to get container status \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": rpc error: code = NotFound desc = could not find container \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": container with ID starting with 9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff not found: ID does not exist" Mar 13 12:52:19.404315 master-0 kubenswrapper[19715]: I0313 12:52:19.404110 19715 scope.go:117] "RemoveContainer" containerID="3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d" Mar 13 12:52:19.404847 master-0 kubenswrapper[19715]: I0313 12:52:19.404559 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d"} err="failed to get container status \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": rpc error: code = NotFound desc = could not find container \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": container with ID starting with 3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d not found: ID does not exist" Mar 13 12:52:19.404847 master-0 kubenswrapper[19715]: I0313 12:52:19.404694 19715 scope.go:117] "RemoveContainer" containerID="6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a" Mar 13 
12:52:19.405108 master-0 kubenswrapper[19715]: I0313 12:52:19.405076 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a"} err="failed to get container status \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": rpc error: code = NotFound desc = could not find container \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": container with ID starting with 6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a not found: ID does not exist" Mar 13 12:52:19.405108 master-0 kubenswrapper[19715]: I0313 12:52:19.405104 19715 scope.go:117] "RemoveContainer" containerID="c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771" Mar 13 12:52:19.405705 master-0 kubenswrapper[19715]: I0313 12:52:19.405372 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771"} err="failed to get container status \"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": rpc error: code = NotFound desc = could not find container \"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": container with ID starting with c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771 not found: ID does not exist" Mar 13 12:52:19.405705 master-0 kubenswrapper[19715]: I0313 12:52:19.405406 19715 scope.go:117] "RemoveContainer" containerID="d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4" Mar 13 12:52:19.406191 master-0 kubenswrapper[19715]: I0313 12:52:19.406005 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4"} err="failed to get container status \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": rpc error: code = NotFound desc = could not find 
container \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": container with ID starting with d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4 not found: ID does not exist" Mar 13 12:52:19.406191 master-0 kubenswrapper[19715]: I0313 12:52:19.406029 19715 scope.go:117] "RemoveContainer" containerID="3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e" Mar 13 12:52:19.406586 master-0 kubenswrapper[19715]: I0313 12:52:19.406543 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e"} err="failed to get container status \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": rpc error: code = NotFound desc = could not find container \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": container with ID starting with 3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e not found: ID does not exist" Mar 13 12:52:19.406586 master-0 kubenswrapper[19715]: I0313 12:52:19.406570 19715 scope.go:117] "RemoveContainer" containerID="70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d" Mar 13 12:52:19.406914 master-0 kubenswrapper[19715]: I0313 12:52:19.406887 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d"} err="failed to get container status \"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": rpc error: code = NotFound desc = could not find container \"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": container with ID starting with 70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d not found: ID does not exist" Mar 13 12:52:19.406973 master-0 kubenswrapper[19715]: I0313 12:52:19.406910 19715 scope.go:117] "RemoveContainer" containerID="9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff" 
Mar 13 12:52:19.408481 master-0 kubenswrapper[19715]: I0313 12:52:19.408271 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff"} err="failed to get container status \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": rpc error: code = NotFound desc = could not find container \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": container with ID starting with 9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff not found: ID does not exist" Mar 13 12:52:19.408481 master-0 kubenswrapper[19715]: I0313 12:52:19.408328 19715 scope.go:117] "RemoveContainer" containerID="3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d" Mar 13 12:52:19.408766 master-0 kubenswrapper[19715]: I0313 12:52:19.408737 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d"} err="failed to get container status \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": rpc error: code = NotFound desc = could not find container \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": container with ID starting with 3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d not found: ID does not exist" Mar 13 12:52:19.408766 master-0 kubenswrapper[19715]: I0313 12:52:19.408765 19715 scope.go:117] "RemoveContainer" containerID="6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a" Mar 13 12:52:19.409220 master-0 kubenswrapper[19715]: I0313 12:52:19.409132 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a"} err="failed to get container status \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": rpc error: code = NotFound desc = could not find 
container \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": container with ID starting with 6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a not found: ID does not exist" Mar 13 12:52:19.409220 master-0 kubenswrapper[19715]: I0313 12:52:19.409158 19715 scope.go:117] "RemoveContainer" containerID="c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771" Mar 13 12:52:19.409655 master-0 kubenswrapper[19715]: I0313 12:52:19.409510 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771"} err="failed to get container status \"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": rpc error: code = NotFound desc = could not find container \"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": container with ID starting with c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771 not found: ID does not exist" Mar 13 12:52:19.409655 master-0 kubenswrapper[19715]: I0313 12:52:19.409537 19715 scope.go:117] "RemoveContainer" containerID="d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4" Mar 13 12:52:19.410020 master-0 kubenswrapper[19715]: I0313 12:52:19.409811 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4"} err="failed to get container status \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": rpc error: code = NotFound desc = could not find container \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": container with ID starting with d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4 not found: ID does not exist" Mar 13 12:52:19.410020 master-0 kubenswrapper[19715]: I0313 12:52:19.409835 19715 scope.go:117] "RemoveContainer" containerID="3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e" 
Mar 13 12:52:19.410305 master-0 kubenswrapper[19715]: I0313 12:52:19.410099 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e"} err="failed to get container status \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": rpc error: code = NotFound desc = could not find container \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": container with ID starting with 3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e not found: ID does not exist" Mar 13 12:52:19.410305 master-0 kubenswrapper[19715]: I0313 12:52:19.410116 19715 scope.go:117] "RemoveContainer" containerID="70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d" Mar 13 12:52:19.410387 master-0 kubenswrapper[19715]: I0313 12:52:19.410351 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d"} err="failed to get container status \"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": rpc error: code = NotFound desc = could not find container \"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": container with ID starting with 70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d not found: ID does not exist" Mar 13 12:52:19.410434 master-0 kubenswrapper[19715]: I0313 12:52:19.410397 19715 scope.go:117] "RemoveContainer" containerID="9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff" Mar 13 12:52:19.410800 master-0 kubenswrapper[19715]: I0313 12:52:19.410767 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff"} err="failed to get container status \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": rpc error: code = NotFound desc = could not find 
container \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": container with ID starting with 9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff not found: ID does not exist" Mar 13 12:52:19.410864 master-0 kubenswrapper[19715]: I0313 12:52:19.410799 19715 scope.go:117] "RemoveContainer" containerID="3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d" Mar 13 12:52:19.411184 master-0 kubenswrapper[19715]: I0313 12:52:19.411112 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d"} err="failed to get container status \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": rpc error: code = NotFound desc = could not find container \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": container with ID starting with 3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d not found: ID does not exist" Mar 13 12:52:19.411184 master-0 kubenswrapper[19715]: I0313 12:52:19.411138 19715 scope.go:117] "RemoveContainer" containerID="6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a" Mar 13 12:52:19.411638 master-0 kubenswrapper[19715]: I0313 12:52:19.411449 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a"} err="failed to get container status \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": rpc error: code = NotFound desc = could not find container \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": container with ID starting with 6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a not found: ID does not exist" Mar 13 12:52:19.411638 master-0 kubenswrapper[19715]: I0313 12:52:19.411469 19715 scope.go:117] "RemoveContainer" containerID="c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771" 
Mar 13 12:52:19.411756 master-0 kubenswrapper[19715]: I0313 12:52:19.411718 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771"} err="failed to get container status \"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": rpc error: code = NotFound desc = could not find container \"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": container with ID starting with c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771 not found: ID does not exist" Mar 13 12:52:19.411756 master-0 kubenswrapper[19715]: I0313 12:52:19.411742 19715 scope.go:117] "RemoveContainer" containerID="d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4" Mar 13 12:52:19.413592 master-0 kubenswrapper[19715]: I0313 12:52:19.413460 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4"} err="failed to get container status \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": rpc error: code = NotFound desc = could not find container \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": container with ID starting with d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4 not found: ID does not exist" Mar 13 12:52:19.413592 master-0 kubenswrapper[19715]: I0313 12:52:19.413501 19715 scope.go:117] "RemoveContainer" containerID="3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e" Mar 13 12:52:19.414006 master-0 kubenswrapper[19715]: I0313 12:52:19.413911 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e"} err="failed to get container status \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": rpc error: code = NotFound desc = could not find 
container \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": container with ID starting with 3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e not found: ID does not exist" Mar 13 12:52:19.414006 master-0 kubenswrapper[19715]: I0313 12:52:19.413935 19715 scope.go:117] "RemoveContainer" containerID="70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d" Mar 13 12:52:19.414372 master-0 kubenswrapper[19715]: I0313 12:52:19.414296 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d"} err="failed to get container status \"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": rpc error: code = NotFound desc = could not find container \"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": container with ID starting with 70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d not found: ID does not exist" Mar 13 12:52:19.414372 master-0 kubenswrapper[19715]: I0313 12:52:19.414316 19715 scope.go:117] "RemoveContainer" containerID="9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff" Mar 13 12:52:19.414708 master-0 kubenswrapper[19715]: I0313 12:52:19.414678 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff"} err="failed to get container status \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": rpc error: code = NotFound desc = could not find container \"9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff\": container with ID starting with 9de6b95277c40f76dd3a072e5c0f892ac1085f25b4909ada1caa40c116ec8fff not found: ID does not exist" Mar 13 12:52:19.414708 master-0 kubenswrapper[19715]: I0313 12:52:19.414705 19715 scope.go:117] "RemoveContainer" containerID="3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d" 
Mar 13 12:52:19.415099 master-0 kubenswrapper[19715]: I0313 12:52:19.414980 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d"} err="failed to get container status \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": rpc error: code = NotFound desc = could not find container \"3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d\": container with ID starting with 3e5c327e8ff872c16768aced4daaf04a357339f5e30ee9011d77fec95a761e4d not found: ID does not exist" Mar 13 12:52:19.415099 master-0 kubenswrapper[19715]: I0313 12:52:19.415017 19715 scope.go:117] "RemoveContainer" containerID="6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a" Mar 13 12:52:19.415347 master-0 kubenswrapper[19715]: I0313 12:52:19.415315 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a"} err="failed to get container status \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": rpc error: code = NotFound desc = could not find container \"6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a\": container with ID starting with 6352c80bab7f955cabd1759f9ca4c9b26c88564f5ed2388d1081b5fe5fd5f51a not found: ID does not exist" Mar 13 12:52:19.415347 master-0 kubenswrapper[19715]: I0313 12:52:19.415342 19715 scope.go:117] "RemoveContainer" containerID="c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771" Mar 13 12:52:19.417972 master-0 kubenswrapper[19715]: I0313 12:52:19.417894 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771"} err="failed to get container status \"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": rpc error: code = NotFound desc = could not find 
container \"c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771\": container with ID starting with c79293f794ec7de9ee69dc1579a9aa64ae369e69de3571c0a9911b085e5e7771 not found: ID does not exist" Mar 13 12:52:19.418073 master-0 kubenswrapper[19715]: I0313 12:52:19.417972 19715 scope.go:117] "RemoveContainer" containerID="d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4" Mar 13 12:52:19.418609 master-0 kubenswrapper[19715]: I0313 12:52:19.418547 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4"} err="failed to get container status \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": rpc error: code = NotFound desc = could not find container \"d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4\": container with ID starting with d121c58146ba7d6c223eb21164fff1f92eb5de44a61a57d521c87b26695d06b4 not found: ID does not exist" Mar 13 12:52:19.418695 master-0 kubenswrapper[19715]: I0313 12:52:19.418611 19715 scope.go:117] "RemoveContainer" containerID="3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e" Mar 13 12:52:19.419487 master-0 kubenswrapper[19715]: I0313 12:52:19.419337 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e"} err="failed to get container status \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": rpc error: code = NotFound desc = could not find container \"3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e\": container with ID starting with 3d5bbac85b02bd74044d90de0a0213aef8afe8ab5df9ada327ab6c09c215cb5e not found: ID does not exist" Mar 13 12:52:19.419487 master-0 kubenswrapper[19715]: I0313 12:52:19.419374 19715 scope.go:117] "RemoveContainer" containerID="70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d" 
Mar 13 12:52:19.421351 master-0 kubenswrapper[19715]: I0313 12:52:19.419748 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d"} err="failed to get container status \"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": rpc error: code = NotFound desc = could not find container \"70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d\": container with ID starting with 70b3d4db810faebf3250515b302840666aa8ec49189bed17e5256d44fa36a20d not found: ID does not exist" Mar 13 12:52:19.679840 master-0 kubenswrapper[19715]: E0313 12:52:19.679770 19715 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 12:52:19.679840 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-b649d7df7-lm9xz_openshift-console_3d6f2f8a-af35-43a1-8baf-fe3e731acba1_0(4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f): error adding pod openshift-console_console-b649d7df7-lm9xz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f" Netns:"/var/run/netns/d0d39a12-1d24-471e-b19e-f563458c9989" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-b649d7df7-lm9xz;K8S_POD_INFRA_CONTAINER_ID=4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f;K8S_POD_UID=3d6f2f8a-af35-43a1-8baf-fe3e731acba1" Path:"" ERRORED: error configuring pod [openshift-console/console-b649d7df7-lm9xz] networking: Multus: [openshift-console/console-b649d7df7-lm9xz/3d6f2f8a-af35-43a1-8baf-fe3e731acba1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: SetNetworkStatus: failed to update the pod 
console-b649d7df7-lm9xz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-b649d7df7-lm9xz?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:19.679840 master-0 kubenswrapper[19715]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:19.679840 master-0 kubenswrapper[19715]: > Mar 13 12:52:19.680123 master-0 kubenswrapper[19715]: E0313 12:52:19.679887 19715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 12:52:19.680123 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-b649d7df7-lm9xz_openshift-console_3d6f2f8a-af35-43a1-8baf-fe3e731acba1_0(4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f): error adding pod openshift-console_console-b649d7df7-lm9xz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f" Netns:"/var/run/netns/d0d39a12-1d24-471e-b19e-f563458c9989" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-b649d7df7-lm9xz;K8S_POD_INFRA_CONTAINER_ID=4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f;K8S_POD_UID=3d6f2f8a-af35-43a1-8baf-fe3e731acba1" Path:"" ERRORED: error configuring pod [openshift-console/console-b649d7df7-lm9xz] networking: Multus: [openshift-console/console-b649d7df7-lm9xz/3d6f2f8a-af35-43a1-8baf-fe3e731acba1]: error setting the networks 
status: SetPodNetworkStatusAnnotation: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: SetNetworkStatus: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-b649d7df7-lm9xz?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:19.680123 master-0 kubenswrapper[19715]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:19.680123 master-0 kubenswrapper[19715]: > pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:19.680123 master-0 kubenswrapper[19715]: E0313 12:52:19.679926 19715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 12:52:19.680123 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-b649d7df7-lm9xz_openshift-console_3d6f2f8a-af35-43a1-8baf-fe3e731acba1_0(4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f): error adding pod openshift-console_console-b649d7df7-lm9xz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f" Netns:"/var/run/netns/d0d39a12-1d24-471e-b19e-f563458c9989" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-b649d7df7-lm9xz;K8S_POD_INFRA_CONTAINER_ID=4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f;K8S_POD_UID=3d6f2f8a-af35-43a1-8baf-fe3e731acba1" Path:"" 
ERRORED: error configuring pod [openshift-console/console-b649d7df7-lm9xz] networking: Multus: [openshift-console/console-b649d7df7-lm9xz/3d6f2f8a-af35-43a1-8baf-fe3e731acba1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: SetNetworkStatus: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-b649d7df7-lm9xz?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:19.680123 master-0 kubenswrapper[19715]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:19.680123 master-0 kubenswrapper[19715]: > pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:19.680123 master-0 kubenswrapper[19715]: E0313 12:52:19.680013 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-b649d7df7-lm9xz_openshift-console(3d6f2f8a-af35-43a1-8baf-fe3e731acba1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-b649d7df7-lm9xz_openshift-console(3d6f2f8a-af35-43a1-8baf-fe3e731acba1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-b649d7df7-lm9xz_openshift-console_3d6f2f8a-af35-43a1-8baf-fe3e731acba1_0(4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f): error adding pod openshift-console_console-b649d7df7-lm9xz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request 
failed with status 400: 'ContainerID:\\\"4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f\\\" Netns:\\\"/var/run/netns/d0d39a12-1d24-471e-b19e-f563458c9989\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-b649d7df7-lm9xz;K8S_POD_INFRA_CONTAINER_ID=4a2e26f94573b11390f87b0f60ece61455bf83f57cd06ecaedf32eb93ce7f74f;K8S_POD_UID=3d6f2f8a-af35-43a1-8baf-fe3e731acba1\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-console/console-b649d7df7-lm9xz] networking: Multus: [openshift-console/console-b649d7df7-lm9xz/3d6f2f8a-af35-43a1-8baf-fe3e731acba1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: SetNetworkStatus: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-b649d7df7-lm9xz?timeout=1m0s\\\": dial tcp 192.168.32.10:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" Mar 13 12:52:19.716524 master-0 kubenswrapper[19715]: I0313 12:52:19.715809 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" path="/var/lib/kubelet/pods/cdcecc61ff5eeb08bd2a3ac12599e4f9/volumes" Mar 13 12:52:19.779270 master-0 kubenswrapper[19715]: E0313 12:52:19.779203 
19715 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 12:52:19.779270 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-master-0_openshift-kube-scheduler_b5f67c2e-1d8e-4315-bef7-c8015516cae0_0(b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b): error adding pod openshift-kube-scheduler_installer-6-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b" Netns:"/var/run/netns/a01d7d68-235f-4c9e-828a-faf79a5c319f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-6-master-0;K8S_POD_INFRA_CONTAINER_ID=b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b;K8S_POD_UID=b5f67c2e-1d8e-4315-bef7-c8015516cae0" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-6-master-0] networking: Multus: [openshift-kube-scheduler/installer-6-master-0/b5f67c2e-1d8e-4315-bef7-c8015516cae0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-6-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-6-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-6-master-0?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:19.779270 master-0 kubenswrapper[19715]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:19.779270 master-0 kubenswrapper[19715]: > Mar 13 12:52:19.780157 master-0 kubenswrapper[19715]: E0313 12:52:19.779293 19715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 12:52:19.780157 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-master-0_openshift-kube-scheduler_b5f67c2e-1d8e-4315-bef7-c8015516cae0_0(b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b): error adding pod openshift-kube-scheduler_installer-6-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b" Netns:"/var/run/netns/a01d7d68-235f-4c9e-828a-faf79a5c319f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-6-master-0;K8S_POD_INFRA_CONTAINER_ID=b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b;K8S_POD_UID=b5f67c2e-1d8e-4315-bef7-c8015516cae0" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-6-master-0] networking: Multus: [openshift-kube-scheduler/installer-6-master-0/b5f67c2e-1d8e-4315-bef7-c8015516cae0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-6-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-6-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-6-master-0?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:19.780157 master-0 kubenswrapper[19715]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:19.780157 master-0 kubenswrapper[19715]: > pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:19.780157 master-0 kubenswrapper[19715]: E0313 12:52:19.779376 19715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 12:52:19.780157 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-master-0_openshift-kube-scheduler_b5f67c2e-1d8e-4315-bef7-c8015516cae0_0(b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b): error adding pod openshift-kube-scheduler_installer-6-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b" Netns:"/var/run/netns/a01d7d68-235f-4c9e-828a-faf79a5c319f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-6-master-0;K8S_POD_INFRA_CONTAINER_ID=b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b;K8S_POD_UID=b5f67c2e-1d8e-4315-bef7-c8015516cae0" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-6-master-0] networking: Multus: [openshift-kube-scheduler/installer-6-master-0/b5f67c2e-1d8e-4315-bef7-c8015516cae0]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod installer-6-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-6-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-6-master-0?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:19.780157 master-0 kubenswrapper[19715]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:19.780157 master-0 kubenswrapper[19715]: > pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:19.780157 master-0 kubenswrapper[19715]: E0313 12:52:19.779470 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-6-master-0_openshift-kube-scheduler(b5f67c2e-1d8e-4315-bef7-c8015516cae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-6-master-0_openshift-kube-scheduler(b5f67c2e-1d8e-4315-bef7-c8015516cae0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-master-0_openshift-kube-scheduler_b5f67c2e-1d8e-4315-bef7-c8015516cae0_0(b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b): error adding pod openshift-kube-scheduler_installer-6-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b\\\" Netns:\\\"/var/run/netns/a01d7d68-235f-4c9e-828a-faf79a5c319f\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-6-master-0;K8S_POD_INFRA_CONTAINER_ID=b41f4593837a498be860c8c6ea431487b32bae6488a6acd958bc84b9603d4e3b;K8S_POD_UID=b5f67c2e-1d8e-4315-bef7-c8015516cae0\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-scheduler/installer-6-master-0] networking: Multus: [openshift-kube-scheduler/installer-6-master-0/b5f67c2e-1d8e-4315-bef7-c8015516cae0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-6-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-6-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-6-master-0?timeout=1m0s\\\": dial tcp 192.168.32.10:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-scheduler/installer-6-master-0" podUID="b5f67c2e-1d8e-4315-bef7-c8015516cae0" Mar 13 12:52:20.240634 master-0 kubenswrapper[19715]: I0313 12:52:20.240350 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:20.240634 master-0 kubenswrapper[19715]: I0313 12:52:20.240419 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:20.241006 master-0 kubenswrapper[19715]: I0313 12:52:20.240858 19715 patch_prober.go:28] interesting pod/downloads-84f57b9877-nz574 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.91:8080/\": dial tcp 10.128.0.91:8080: connect: connection refused" start-of-body= Mar 13 12:52:20.241006 master-0 kubenswrapper[19715]: I0313 12:52:20.240912 19715 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-nz574" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.91:8080/\": dial tcp 10.128.0.91:8080: connect: connection refused" Mar 13 12:52:20.241304 master-0 kubenswrapper[19715]: I0313 12:52:20.241273 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:20.241565 master-0 kubenswrapper[19715]: I0313 12:52:20.241273 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:20.554334 master-0 kubenswrapper[19715]: I0313 12:52:20.554118 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:52:20.555147 master-0 kubenswrapper[19715]: I0313 12:52:20.555098 19715 status_manager.go:851] "Failed to get status for pod" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" pod="openshift-console/downloads-84f57b9877-nz574" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-nz574\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:20.555836 master-0 kubenswrapper[19715]: I0313 12:52:20.555746 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:20.643525 master-0 kubenswrapper[19715]: I0313 12:52:20.643159 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:20.644133 master-0 kubenswrapper[19715]: E0313 12:52:20.644098 19715 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-5-master-0: failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:20.644183 master-0 kubenswrapper[19715]: E0313 12:52:20.644159 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access 
podName:2a0e239c-fe39-43af-8b0a-2964897d8b92 nodeName:}" failed. No retries permitted until 2026-03-13 12:52:28.644142762 +0000 UTC m=+175.210815519 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access") pod "installer-5-master-0" (UID: "2a0e239c-fe39-43af-8b0a-2964897d8b92") : failed to fetch token: Post "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:20.744621 master-0 kubenswrapper[19715]: I0313 12:52:20.744364 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc6e9ceb-c6bf-409f-b515-b441a94db482-kube-api-access\") pod \"cc6e9ceb-c6bf-409f-b515-b441a94db482\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " Mar 13 12:52:20.744621 master-0 kubenswrapper[19715]: I0313 12:52:20.744556 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-kubelet-dir\") pod \"cc6e9ceb-c6bf-409f-b515-b441a94db482\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " Mar 13 12:52:20.744621 master-0 kubenswrapper[19715]: I0313 12:52:20.744603 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-var-lock\") pod \"cc6e9ceb-c6bf-409f-b515-b441a94db482\" (UID: \"cc6e9ceb-c6bf-409f-b515-b441a94db482\") " Mar 13 12:52:20.748553 master-0 kubenswrapper[19715]: I0313 12:52:20.745252 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"cc6e9ceb-c6bf-409f-b515-b441a94db482" (UID: "cc6e9ceb-c6bf-409f-b515-b441a94db482"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:20.748553 master-0 kubenswrapper[19715]: I0313 12:52:20.745415 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-var-lock" (OuterVolumeSpecName: "var-lock") pod "cc6e9ceb-c6bf-409f-b515-b441a94db482" (UID: "cc6e9ceb-c6bf-409f-b515-b441a94db482"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:52:20.751966 master-0 kubenswrapper[19715]: I0313 12:52:20.751907 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc6e9ceb-c6bf-409f-b515-b441a94db482-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "cc6e9ceb-c6bf-409f-b515-b441a94db482" (UID: "cc6e9ceb-c6bf-409f-b515-b441a94db482"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:52:20.848062 master-0 kubenswrapper[19715]: I0313 12:52:20.846871 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/cc6e9ceb-c6bf-409f-b515-b441a94db482-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:20.848062 master-0 kubenswrapper[19715]: I0313 12:52:20.846927 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:20.848062 master-0 kubenswrapper[19715]: I0313 12:52:20.846939 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc6e9ceb-c6bf-409f-b515-b441a94db482-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:52:20.991488 master-0 kubenswrapper[19715]: I0313 12:52:20.991389 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:52:21.040662 master-0 kubenswrapper[19715]: I0313 12:52:21.040615 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:52:21.042404 master-0 kubenswrapper[19715]: I0313 12:52:21.042356 19715 status_manager.go:851] "Failed to get status for pod" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:21.043055 master-0 kubenswrapper[19715]: I0313 12:52:21.043003 19715 status_manager.go:851] "Failed to get status for pod" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" pod="openshift-console/downloads-84f57b9877-nz574" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-nz574\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:21.047712 master-0 kubenswrapper[19715]: I0313 12:52:21.043650 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:21.132069 master-0 kubenswrapper[19715]: E0313 12:52:21.131964 19715 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 12:52:21.132069 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-master-0_openshift-kube-scheduler_b5f67c2e-1d8e-4315-bef7-c8015516cae0_0(9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d): error adding pod openshift-kube-scheduler_installer-6-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d" Netns:"/var/run/netns/c89128a9-d719-4d45-b8e8-3d6c26df2085" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-6-master-0;K8S_POD_INFRA_CONTAINER_ID=9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d;K8S_POD_UID=b5f67c2e-1d8e-4315-bef7-c8015516cae0" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-6-master-0] networking: Multus: [openshift-kube-scheduler/installer-6-master-0/b5f67c2e-1d8e-4315-bef7-c8015516cae0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-6-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod 
installer-6-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-6-master-0?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:21.132069 master-0 kubenswrapper[19715]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:21.132069 master-0 kubenswrapper[19715]: > Mar 13 12:52:21.132317 master-0 kubenswrapper[19715]: E0313 12:52:21.132090 19715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 12:52:21.132317 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-master-0_openshift-kube-scheduler_b5f67c2e-1d8e-4315-bef7-c8015516cae0_0(9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d): error adding pod openshift-kube-scheduler_installer-6-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d" Netns:"/var/run/netns/c89128a9-d719-4d45-b8e8-3d6c26df2085" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-6-master-0;K8S_POD_INFRA_CONTAINER_ID=9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d;K8S_POD_UID=b5f67c2e-1d8e-4315-bef7-c8015516cae0" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-6-master-0] networking: Multus: [openshift-kube-scheduler/installer-6-master-0/b5f67c2e-1d8e-4315-bef7-c8015516cae0]: error 
setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-6-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-6-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-6-master-0?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:21.132317 master-0 kubenswrapper[19715]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:21.132317 master-0 kubenswrapper[19715]: > pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:21.132317 master-0 kubenswrapper[19715]: E0313 12:52:21.132132 19715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 12:52:21.132317 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-6-master-0_openshift-kube-scheduler_b5f67c2e-1d8e-4315-bef7-c8015516cae0_0(9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d): error adding pod openshift-kube-scheduler_installer-6-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d" Netns:"/var/run/netns/c89128a9-d719-4d45-b8e8-3d6c26df2085" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-6-master-0;K8S_POD_INFRA_CONTAINER_ID=9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d;K8S_POD_UID=b5f67c2e-1d8e-4315-bef7-c8015516cae0" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-6-master-0] networking: Multus: [openshift-kube-scheduler/installer-6-master-0/b5f67c2e-1d8e-4315-bef7-c8015516cae0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-6-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-6-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-6-master-0?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:21.132317 master-0 kubenswrapper[19715]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:21.132317 master-0 kubenswrapper[19715]: > pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:21.132317 master-0 kubenswrapper[19715]: E0313 12:52:21.132241 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-6-master-0_openshift-kube-scheduler(b5f67c2e-1d8e-4315-bef7-c8015516cae0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-6-master-0_openshift-kube-scheduler(b5f67c2e-1d8e-4315-bef7-c8015516cae0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_installer-6-master-0_openshift-kube-scheduler_b5f67c2e-1d8e-4315-bef7-c8015516cae0_0(9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d): error adding pod openshift-kube-scheduler_installer-6-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d\\\" Netns:\\\"/var/run/netns/c89128a9-d719-4d45-b8e8-3d6c26df2085\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-6-master-0;K8S_POD_INFRA_CONTAINER_ID=9cdd38b784714a79629b3190a02d7e1588a9bf200cb24cde8c6967156d017e1d;K8S_POD_UID=b5f67c2e-1d8e-4315-bef7-c8015516cae0\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-scheduler/installer-6-master-0] networking: Multus: [openshift-kube-scheduler/installer-6-master-0/b5f67c2e-1d8e-4315-bef7-c8015516cae0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-6-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-6-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-6-master-0?timeout=1m0s\\\": dial tcp 192.168.32.10:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" 
pod="openshift-kube-scheduler/installer-6-master-0" podUID="b5f67c2e-1d8e-4315-bef7-c8015516cae0" Mar 13 12:52:21.227802 master-0 kubenswrapper[19715]: E0313 12:52:21.227730 19715 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 12:52:21.227802 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-b649d7df7-lm9xz_openshift-console_3d6f2f8a-af35-43a1-8baf-fe3e731acba1_0(60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990): error adding pod openshift-console_console-b649d7df7-lm9xz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990" Netns:"/var/run/netns/5c739997-ea9e-43de-bd1a-0d5b312b9db6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-b649d7df7-lm9xz;K8S_POD_INFRA_CONTAINER_ID=60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990;K8S_POD_UID=3d6f2f8a-af35-43a1-8baf-fe3e731acba1" Path:"" ERRORED: error configuring pod [openshift-console/console-b649d7df7-lm9xz] networking: Multus: [openshift-console/console-b649d7df7-lm9xz/3d6f2f8a-af35-43a1-8baf-fe3e731acba1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: SetNetworkStatus: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-b649d7df7-lm9xz?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:21.227802 master-0 kubenswrapper[19715]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:21.227802 master-0 kubenswrapper[19715]: > Mar 13 12:52:21.227990 master-0 kubenswrapper[19715]: E0313 12:52:21.227886 19715 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 12:52:21.227990 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-b649d7df7-lm9xz_openshift-console_3d6f2f8a-af35-43a1-8baf-fe3e731acba1_0(60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990): error adding pod openshift-console_console-b649d7df7-lm9xz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990" Netns:"/var/run/netns/5c739997-ea9e-43de-bd1a-0d5b312b9db6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-b649d7df7-lm9xz;K8S_POD_INFRA_CONTAINER_ID=60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990;K8S_POD_UID=3d6f2f8a-af35-43a1-8baf-fe3e731acba1" Path:"" ERRORED: error configuring pod [openshift-console/console-b649d7df7-lm9xz] networking: Multus: [openshift-console/console-b649d7df7-lm9xz/3d6f2f8a-af35-43a1-8baf-fe3e731acba1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: SetNetworkStatus: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-b649d7df7-lm9xz?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:21.227990 master-0 kubenswrapper[19715]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:21.227990 master-0 kubenswrapper[19715]: > pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:21.227990 master-0 kubenswrapper[19715]: E0313 12:52:21.227943 19715 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 12:52:21.227990 master-0 kubenswrapper[19715]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-b649d7df7-lm9xz_openshift-console_3d6f2f8a-af35-43a1-8baf-fe3e731acba1_0(60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990): error adding pod openshift-console_console-b649d7df7-lm9xz to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990" Netns:"/var/run/netns/5c739997-ea9e-43de-bd1a-0d5b312b9db6" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-b649d7df7-lm9xz;K8S_POD_INFRA_CONTAINER_ID=60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990;K8S_POD_UID=3d6f2f8a-af35-43a1-8baf-fe3e731acba1" Path:"" ERRORED: error configuring pod [openshift-console/console-b649d7df7-lm9xz] networking: Multus: [openshift-console/console-b649d7df7-lm9xz/3d6f2f8a-af35-43a1-8baf-fe3e731acba1]: error setting the networks status: SetPodNetworkStatusAnnotation: 
failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: SetNetworkStatus: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-b649d7df7-lm9xz?timeout=1m0s": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 12:52:21.227990 master-0 kubenswrapper[19715]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 12:52:21.227990 master-0 kubenswrapper[19715]: > pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:21.228221 master-0 kubenswrapper[19715]: E0313 12:52:21.228029 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"console-b649d7df7-lm9xz_openshift-console(3d6f2f8a-af35-43a1-8baf-fe3e731acba1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"console-b649d7df7-lm9xz_openshift-console(3d6f2f8a-af35-43a1-8baf-fe3e731acba1)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_console-b649d7df7-lm9xz_openshift-console_3d6f2f8a-af35-43a1-8baf-fe3e731acba1_0(60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990): error adding pod openshift-console_console-b649d7df7-lm9xz to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990\\\" Netns:\\\"/var/run/netns/5c739997-ea9e-43de-bd1a-0d5b312b9db6\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-console;K8S_POD_NAME=console-b649d7df7-lm9xz;K8S_POD_INFRA_CONTAINER_ID=60c9011b089f1e562bca742268d97e60cee9f655fc3ce4b9f5a53c3c12970990;K8S_POD_UID=3d6f2f8a-af35-43a1-8baf-fe3e731acba1\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-console/console-b649d7df7-lm9xz] networking: Multus: [openshift-console/console-b649d7df7-lm9xz/3d6f2f8a-af35-43a1-8baf-fe3e731acba1]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: SetNetworkStatus: failed to update the pod console-b649d7df7-lm9xz in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/console-b649d7df7-lm9xz?timeout=1m0s\\\": dial tcp 192.168.32.10:6443: connect: connection refused\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" Mar 13 12:52:21.262356 master-0 kubenswrapper[19715]: I0313 12:52:21.262292 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 12:52:21.262356 master-0 kubenswrapper[19715]: I0313 12:52:21.262292 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"cc6e9ceb-c6bf-409f-b515-b441a94db482","Type":"ContainerDied","Data":"9e28b71a728c0e0742441c66e1b146a4b4ac35057853f94d7afcc53f16ebba6b"} Mar 13 12:52:21.262356 master-0 kubenswrapper[19715]: I0313 12:52:21.262362 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e28b71a728c0e0742441c66e1b146a4b4ac35057853f94d7afcc53f16ebba6b" Mar 13 12:52:21.265830 master-0 kubenswrapper[19715]: I0313 12:52:21.265777 19715 patch_prober.go:28] interesting pod/downloads-84f57b9877-nz574 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.91:8080/\": dial tcp 10.128.0.91:8080: connect: connection refused" start-of-body= Mar 13 12:52:21.265830 master-0 kubenswrapper[19715]: I0313 12:52:21.265823 19715 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-nz574" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.91:8080/\": dial tcp 10.128.0.91:8080: connect: connection refused" Mar 13 12:52:21.290005 master-0 kubenswrapper[19715]: I0313 12:52:21.289883 19715 status_manager.go:851] "Failed to get status for pod" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:21.291610 master-0 kubenswrapper[19715]: I0313 12:52:21.290962 19715 status_manager.go:851] "Failed to get status for pod" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" pod="openshift-console/downloads-84f57b9877-nz574" 
err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-nz574\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:21.291610 master-0 kubenswrapper[19715]: I0313 12:52:21.291586 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:21.293503 master-0 kubenswrapper[19715]: I0313 12:52:21.293480 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:52:21.293993 master-0 kubenswrapper[19715]: I0313 12:52:21.293963 19715 status_manager.go:851] "Failed to get status for pod" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:21.294794 master-0 kubenswrapper[19715]: I0313 12:52:21.294737 19715 status_manager.go:851] "Failed to get status for pod" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" pod="openshift-console/downloads-84f57b9877-nz574" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-nz574\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:21.295298 master-0 kubenswrapper[19715]: I0313 12:52:21.295270 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: 
connection refused" Mar 13 12:52:21.995707 master-0 kubenswrapper[19715]: E0313 12:52:21.995540 19715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{installer-5-master-0.189c67a9f404346b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:installer-5-master-0,UID:2a0e239c-fe39-43af-8b0a-2964897d8b92,APIVersion:v1,ResourceVersion:14739,FieldPath:,},Reason:FailedMount,Message:MountVolume.SetUp failed for volume \"kube-api-access\" : failed to fetch token: Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/serviceaccounts/installer-sa/token\": dial tcp 192.168.32.10:6443: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:52:12.918183019 +0000 UTC m=+159.484855776,LastTimestamp:2026-03-13 12:52:12.918183019 +0000 UTC m=+159.484855776,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:52:22.792262 master-0 kubenswrapper[19715]: I0313 12:52:22.792151 19715 patch_prober.go:28] interesting pod/downloads-84f57b9877-nz574 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.128.0.91:8080/\": dial tcp 10.128.0.91:8080: connect: connection refused" start-of-body= Mar 13 12:52:22.792262 master-0 kubenswrapper[19715]: I0313 12:52:22.792254 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-84f57b9877-nz574" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" containerName="download-server" probeResult="failure" output="Get 
\"http://10.128.0.91:8080/\": dial tcp 10.128.0.91:8080: connect: connection refused" Mar 13 12:52:22.792709 master-0 kubenswrapper[19715]: I0313 12:52:22.792300 19715 patch_prober.go:28] interesting pod/downloads-84f57b9877-nz574 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.91:8080/\": dial tcp 10.128.0.91:8080: connect: connection refused" start-of-body= Mar 13 12:52:22.792709 master-0 kubenswrapper[19715]: I0313 12:52:22.792347 19715 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-nz574" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.91:8080/\": dial tcp 10.128.0.91:8080: connect: connection refused" Mar 13 12:52:22.888838 master-0 kubenswrapper[19715]: E0313 12:52:22.888773 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:22.889271 master-0 kubenswrapper[19715]: E0313 12:52:22.889239 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:22.889689 master-0 kubenswrapper[19715]: E0313 12:52:22.889656 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:22.890284 master-0 kubenswrapper[19715]: E0313 12:52:22.890251 19715 controller.go:195] "Failed to update lease" err="Put 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:22.890867 master-0 kubenswrapper[19715]: E0313 12:52:22.890806 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:22.890916 master-0 kubenswrapper[19715]: I0313 12:52:22.890885 19715 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 12:52:22.891639 master-0 kubenswrapper[19715]: E0313 12:52:22.891569 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 13 12:52:23.096162 master-0 kubenswrapper[19715]: E0313 12:52:23.096018 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 13 12:52:23.498338 master-0 kubenswrapper[19715]: E0313 12:52:23.497875 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 13 12:52:23.701913 master-0 kubenswrapper[19715]: I0313 12:52:23.701829 19715 status_manager.go:851] "Failed to get status for pod" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" 
pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:23.702715 master-0 kubenswrapper[19715]: I0313 12:52:23.702646 19715 status_manager.go:851] "Failed to get status for pod" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" pod="openshift-console/downloads-84f57b9877-nz574" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-nz574\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:23.703596 master-0 kubenswrapper[19715]: I0313 12:52:23.703515 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:24.137554 master-0 kubenswrapper[19715]: E0313 12:52:24.137484 19715 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:52:24Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:52:24Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:52:24Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T12:52:24Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:24.138806 master-0 kubenswrapper[19715]: E0313 12:52:24.138762 19715 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:24.139356 master-0 kubenswrapper[19715]: E0313 12:52:24.139321 19715 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:24.139882 master-0 kubenswrapper[19715]: E0313 12:52:24.139851 19715 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:24.140279 master-0 kubenswrapper[19715]: E0313 12:52:24.140249 19715 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:24.140279 master-0 kubenswrapper[19715]: E0313 12:52:24.140269 19715 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 12:52:24.299993 master-0 kubenswrapper[19715]: E0313 12:52:24.299902 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 13 12:52:25.695740 master-0 kubenswrapper[19715]: I0313 12:52:25.695678 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:25.697452 master-0 kubenswrapper[19715]: I0313 12:52:25.697393 19715 status_manager.go:851] "Failed to get status for pod" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:25.698947 master-0 kubenswrapper[19715]: I0313 12:52:25.698899 19715 status_manager.go:851] "Failed to get status for pod" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" pod="openshift-console/downloads-84f57b9877-nz574" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-nz574\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:25.701082 master-0 kubenswrapper[19715]: I0313 12:52:25.701030 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:25.715155 master-0 kubenswrapper[19715]: I0313 12:52:25.715084 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0714281f-7db2-47f0-bbc5-3016b4d61584" Mar 13 12:52:25.715155 master-0 kubenswrapper[19715]: I0313 12:52:25.715153 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0714281f-7db2-47f0-bbc5-3016b4d61584" Mar 13 12:52:25.716244 master-0 kubenswrapper[19715]: E0313 12:52:25.716174 19715 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:25.716991 master-0 kubenswrapper[19715]: I0313 12:52:25.716956 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:25.742932 master-0 kubenswrapper[19715]: W0313 12:52:25.742878 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4c3280e9367536f782caf8bdc07edb85.slice/crio-755825e8c663b83be9da742e007ed43949b822c0b68fbbb11056fb18a2d6516c WatchSource:0}: Error finding container 755825e8c663b83be9da742e007ed43949b822c0b68fbbb11056fb18a2d6516c: Status 404 returned error can't find the container with id 755825e8c663b83be9da742e007ed43949b822c0b68fbbb11056fb18a2d6516c Mar 13 12:52:25.902007 master-0 kubenswrapper[19715]: E0313 12:52:25.901935 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 13 12:52:26.337171 master-0 kubenswrapper[19715]: I0313 12:52:26.337066 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac"} Mar 13 12:52:26.337171 master-0 kubenswrapper[19715]: I0313 12:52:26.337170 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"755825e8c663b83be9da742e007ed43949b822c0b68fbbb11056fb18a2d6516c"} Mar 13 12:52:27.346024 master-0 kubenswrapper[19715]: I0313 12:52:27.345940 19715 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac" exitCode=0 Mar 13 12:52:27.346024 master-0 kubenswrapper[19715]: I0313 12:52:27.346007 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerDied","Data":"626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac"} Mar 13 12:52:27.346775 master-0 kubenswrapper[19715]: I0313 12:52:27.346261 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0714281f-7db2-47f0-bbc5-3016b4d61584" Mar 13 12:52:27.346775 master-0 kubenswrapper[19715]: I0313 12:52:27.346282 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0714281f-7db2-47f0-bbc5-3016b4d61584" Mar 13 12:52:27.347112 master-0 kubenswrapper[19715]: I0313 12:52:27.347067 19715 status_manager.go:851] "Failed to get status for pod" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" pod="openshift-monitoring/prometheus-k8s-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:27.347317 master-0 kubenswrapper[19715]: E0313 12:52:27.347247 19715 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:27.347694 
master-0 kubenswrapper[19715]: I0313 12:52:27.347622 19715 status_manager.go:851] "Failed to get status for pod" podUID="a64d9c42-4a0b-472a-955a-4edab6b33210" pod="openshift-console/downloads-84f57b9877-nz574" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-console/pods/downloads-84f57b9877-nz574\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:27.348227 master-0 kubenswrapper[19715]: I0313 12:52:27.348174 19715 status_manager.go:851] "Failed to get status for pod" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" pod="openshift-kube-apiserver/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:52:28.362455 master-0 kubenswrapper[19715]: I0313 12:52:28.362381 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d"} Mar 13 12:52:28.366957 master-0 kubenswrapper[19715]: I0313 12:52:28.366895 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e95e98146cf857064826636918715dbe/kube-controller-manager/0.log" Mar 13 12:52:28.366957 master-0 kubenswrapper[19715]: I0313 12:52:28.366961 19715 generic.go:334] "Generic (PLEG): container finished" podID="e95e98146cf857064826636918715dbe" containerID="05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1" exitCode=1 Mar 13 12:52:28.367244 master-0 kubenswrapper[19715]: I0313 12:52:28.366995 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"e95e98146cf857064826636918715dbe","Type":"ContainerDied","Data":"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1"} Mar 13 12:52:28.367669 master-0 kubenswrapper[19715]: I0313 12:52:28.367627 19715 scope.go:117] "RemoveContainer" containerID="05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1" Mar 13 12:52:28.699126 master-0 kubenswrapper[19715]: I0313 12:52:28.699069 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:29.377187 master-0 kubenswrapper[19715]: I0313 12:52:29.377135 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e95e98146cf857064826636918715dbe/kube-controller-manager/0.log" Mar 13 12:52:29.377807 master-0 kubenswrapper[19715]: I0313 12:52:29.377233 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"e95e98146cf857064826636918715dbe","Type":"ContainerStarted","Data":"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a"} Mar 13 12:52:29.380995 master-0 kubenswrapper[19715]: I0313 12:52:29.380957 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8"} Mar 13 12:52:29.381112 master-0 kubenswrapper[19715]: I0313 12:52:29.381001 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b"} Mar 13 12:52:29.381112 master-0 kubenswrapper[19715]: I0313 12:52:29.381014 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda"} Mar 13 12:52:30.368372 master-0 kubenswrapper[19715]: I0313 12:52:30.368257 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:30.368372 master-0 kubenswrapper[19715]: I0313 12:52:30.368349 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:30.375826 master-0 kubenswrapper[19715]: I0313 12:52:30.375761 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:30.392259 master-0 kubenswrapper[19715]: I0313 12:52:30.392169 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"4c3280e9367536f782caf8bdc07edb85","Type":"ContainerStarted","Data":"804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7"} Mar 13 12:52:30.392965 master-0 kubenswrapper[19715]: I0313 12:52:30.392602 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0714281f-7db2-47f0-bbc5-3016b4d61584" Mar 13 12:52:30.392965 master-0 kubenswrapper[19715]: I0313 12:52:30.392640 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0714281f-7db2-47f0-bbc5-3016b4d61584" Mar 13 12:52:30.718168 master-0 kubenswrapper[19715]: I0313 
12:52:30.717898 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:30.718168 master-0 kubenswrapper[19715]: I0313 12:52:30.718164 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: I0313 12:52:30.745813 19715 patch_prober.go:28] interesting pod/kube-apiserver-master-0 container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]log ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]etcd ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/openshift.io-api-request-count-filter ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/openshift.io-startkubeinformers ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/start-apiserver-admission-initializer ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/priority-and-fairness-config-consumer ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/priority-and-fairness-filter ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/start-apiextensions-informers ok Mar 13 
12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/start-apiextensions-controllers ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/crd-informer-synced ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/start-system-namespaces-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/start-cluster-authentication-info-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/start-legacy-token-tracking-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/start-service-ip-repair-controllers ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/scheduling/bootstrap-system-priority-classes ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/priority-and-fairness-config-producer ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/bootstrap-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/start-kube-aggregator-informers ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/apiservice-status-local-available-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/apiservice-status-remote-available-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/apiservice-registration-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: 
[+]poststarthook/apiservice-wait-for-first-sync ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/apiservice-discovery-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/kube-apiserver-autoregistration ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]autoregister-completion ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/apiservice-openapi-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: [+]poststarthook/apiservice-openapiv3-controller ok Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: livez check failed Mar 13 12:52:30.746178 master-0 kubenswrapper[19715]: I0313 12:52:30.745886 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 12:52:32.722157 master-0 kubenswrapper[19715]: I0313 12:52:32.722037 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:32.723321 master-0 kubenswrapper[19715]: I0313 12:52:32.722996 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:32.797605 master-0 kubenswrapper[19715]: I0313 12:52:32.797510 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-84f57b9877-nz574" Mar 13 12:52:33.166186 master-0 kubenswrapper[19715]: W0313 12:52:33.165526 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3d6f2f8a_af35_43a1_8baf_fe3e731acba1.slice/crio-268fa8b24649b2535d636e5afbb81d5567d65d515fff43c2b7874859144ab4a1 WatchSource:0}: Error finding container 268fa8b24649b2535d636e5afbb81d5567d65d515fff43c2b7874859144ab4a1: Status 404 returned error can't find the container with id 268fa8b24649b2535d636e5afbb81d5567d65d515fff43c2b7874859144ab4a1 Mar 13 12:52:33.417526 master-0 kubenswrapper[19715]: I0313 12:52:33.417110 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b649d7df7-lm9xz" event={"ID":"3d6f2f8a-af35-43a1-8baf-fe3e731acba1","Type":"ContainerStarted","Data":"268fa8b24649b2535d636e5afbb81d5567d65d515fff43c2b7874859144ab4a1"} Mar 13 12:52:34.605608 master-0 kubenswrapper[19715]: I0313 12:52:34.605542 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access\") pod \"installer-5-master-0\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:34.673273 master-0 kubenswrapper[19715]: I0313 12:52:34.673208 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-jdg75" Mar 13 12:52:34.680990 master-0 kubenswrapper[19715]: I0313 12:52:34.680931 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:52:34.696308 master-0 kubenswrapper[19715]: I0313 12:52:34.696256 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:34.696833 master-0 kubenswrapper[19715]: I0313 12:52:34.696809 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0" Mar 13 12:52:35.468782 master-0 kubenswrapper[19715]: W0313 12:52:35.468709 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2a0e239c_fe39_43af_8b0a_2964897d8b92.slice/crio-e8ac18eee8b9aa5dba4108ca45bb042ac3de2b149c55fb5abfeb6a5c326a1b02 WatchSource:0}: Error finding container e8ac18eee8b9aa5dba4108ca45bb042ac3de2b149c55fb5abfeb6a5c326a1b02: Status 404 returned error can't find the container with id e8ac18eee8b9aa5dba4108ca45bb042ac3de2b149c55fb5abfeb6a5c326a1b02 Mar 13 12:52:35.521292 master-0 kubenswrapper[19715]: I0313 12:52:35.520685 19715 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:35.607446 master-0 kubenswrapper[19715]: I0313 12:52:35.607373 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="4c3280e9367536f782caf8bdc07edb85" podUID="83b6072d-700c-4af1-8363-292b25a36969" Mar 13 12:52:36.446319 master-0 kubenswrapper[19715]: I0313 12:52:36.446158 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2a0e239c-fe39-43af-8b0a-2964897d8b92","Type":"ContainerStarted","Data":"eba3cbd82f9bf9c30a938f2bc8b36ef9213a9c14e54e26f316f0f6d0aad9232c"} Mar 13 12:52:36.446319 master-0 kubenswrapper[19715]: I0313 12:52:36.446243 19715 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2a0e239c-fe39-43af-8b0a-2964897d8b92","Type":"ContainerStarted","Data":"e8ac18eee8b9aa5dba4108ca45bb042ac3de2b149c55fb5abfeb6a5c326a1b02"} Mar 13 12:52:36.448518 master-0 kubenswrapper[19715]: I0313 12:52:36.448477 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0714281f-7db2-47f0-bbc5-3016b4d61584" Mar 13 12:52:36.448518 master-0 kubenswrapper[19715]: I0313 12:52:36.448512 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0714281f-7db2-47f0-bbc5-3016b4d61584" Mar 13 12:52:36.450208 master-0 kubenswrapper[19715]: I0313 12:52:36.450168 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"b5f67c2e-1d8e-4315-bef7-c8015516cae0","Type":"ContainerStarted","Data":"8ba50dfd5da7175c582a40fb68181b94c7be7b21b0a5b5383ecfe8762b301384"} Mar 13 12:52:36.450297 master-0 kubenswrapper[19715]: I0313 12:52:36.450214 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:36.450297 master-0 kubenswrapper[19715]: I0313 12:52:36.450250 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"b5f67c2e-1d8e-4315-bef7-c8015516cae0","Type":"ContainerStarted","Data":"d4fe0707150c31ec3ce06d06c1f4ca00d8cc66eff07e8cd7a34b7a0ce43d8df8"} Mar 13 12:52:36.471673 master-0 kubenswrapper[19715]: I0313 12:52:36.471558 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="4c3280e9367536f782caf8bdc07edb85" podUID="83b6072d-700c-4af1-8363-292b25a36969" Mar 13 12:52:37.459508 master-0 kubenswrapper[19715]: I0313 12:52:37.459450 19715 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0714281f-7db2-47f0-bbc5-3016b4d61584" Mar 13 12:52:37.459508 master-0 kubenswrapper[19715]: I0313 12:52:37.459495 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="0714281f-7db2-47f0-bbc5-3016b4d61584" Mar 13 12:52:37.464323 master-0 kubenswrapper[19715]: I0313 12:52:37.464251 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="4c3280e9367536f782caf8bdc07edb85" podUID="83b6072d-700c-4af1-8363-292b25a36969" Mar 13 12:52:39.548760 master-0 kubenswrapper[19715]: I0313 12:52:39.548647 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b649d7df7-lm9xz" event={"ID":"3d6f2f8a-af35-43a1-8baf-fe3e731acba1","Type":"ContainerStarted","Data":"ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e"} Mar 13 12:52:40.371827 master-0 kubenswrapper[19715]: I0313 12:52:40.371753 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:52:40.529406 master-0 kubenswrapper[19715]: I0313 12:52:40.529322 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:40.529406 master-0 kubenswrapper[19715]: I0313 12:52:40.529385 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:52:40.531609 master-0 kubenswrapper[19715]: I0313 12:52:40.531540 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:52:40.531738 master-0 kubenswrapper[19715]: I0313 
12:52:40.531629 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:52:44.453915 master-0 kubenswrapper[19715]: I0313 12:52:44.453830 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 12:52:44.729118 master-0 kubenswrapper[19715]: I0313 12:52:44.728956 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 13 12:52:45.131404 master-0 kubenswrapper[19715]: I0313 12:52:45.131215 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 13 12:52:45.186671 master-0 kubenswrapper[19715]: I0313 12:52:45.186550 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-5s2w7" Mar 13 12:52:45.290926 master-0 kubenswrapper[19715]: I0313 12:52:45.290876 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 13 12:52:45.351641 master-0 kubenswrapper[19715]: I0313 12:52:45.351564 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 12:52:45.398658 master-0 kubenswrapper[19715]: I0313 12:52:45.398515 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 13 12:52:45.713418 master-0 kubenswrapper[19715]: I0313 12:52:45.713283 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 12:52:46.206295 master-0 kubenswrapper[19715]: I0313 12:52:46.206208 19715 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 13 12:52:46.208245 master-0 kubenswrapper[19715]: I0313 12:52:46.208199 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 12:52:46.263181 master-0 kubenswrapper[19715]: I0313 12:52:46.263135 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 12:52:46.489689 master-0 kubenswrapper[19715]: I0313 12:52:46.489519 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 13 12:52:46.629069 master-0 kubenswrapper[19715]: I0313 12:52:46.628995 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 12:52:46.782872 master-0 kubenswrapper[19715]: I0313 12:52:46.782678 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 13 12:52:46.805919 master-0 kubenswrapper[19715]: I0313 12:52:46.805827 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 12:52:47.001277 master-0 kubenswrapper[19715]: I0313 12:52:47.001211 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 13 12:52:47.057198 master-0 kubenswrapper[19715]: I0313 12:52:47.056941 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-comnvpv6eh6ml" Mar 13 12:52:47.414514 master-0 kubenswrapper[19715]: I0313 12:52:47.414446 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 12:52:47.561174 master-0 kubenswrapper[19715]: I0313 12:52:47.561118 19715 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 12:52:47.662391 master-0 kubenswrapper[19715]: I0313 12:52:47.662328 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 12:52:47.703014 master-0 kubenswrapper[19715]: I0313 12:52:47.701426 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 12:52:47.774327 master-0 kubenswrapper[19715]: I0313 12:52:47.774272 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 12:52:47.819546 master-0 kubenswrapper[19715]: I0313 12:52:47.819496 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 12:52:47.955847 master-0 kubenswrapper[19715]: I0313 12:52:47.955669 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 12:52:48.082363 master-0 kubenswrapper[19715]: I0313 12:52:48.082273 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 13 12:52:48.084046 master-0 kubenswrapper[19715]: I0313 12:52:48.083994 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 12:52:48.149681 master-0 kubenswrapper[19715]: I0313 12:52:48.149567 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 12:52:48.249016 master-0 kubenswrapper[19715]: I0313 12:52:48.248778 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 12:52:48.259902 master-0 kubenswrapper[19715]: I0313 12:52:48.259857 19715 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-console"/"console-dockercfg-mr4r4" Mar 13 12:52:48.328244 master-0 kubenswrapper[19715]: I0313 12:52:48.328169 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 12:52:48.425297 master-0 kubenswrapper[19715]: I0313 12:52:48.425201 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-l78bb" Mar 13 12:52:48.445408 master-0 kubenswrapper[19715]: I0313 12:52:48.445250 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 12:52:48.484603 master-0 kubenswrapper[19715]: I0313 12:52:48.484536 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-7t467" Mar 13 12:52:48.525555 master-0 kubenswrapper[19715]: I0313 12:52:48.525404 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 12:52:48.654025 master-0 kubenswrapper[19715]: I0313 12:52:48.653972 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-xv4qd" Mar 13 12:52:48.776790 master-0 kubenswrapper[19715]: I0313 12:52:48.776614 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9n5pq" Mar 13 12:52:48.808385 master-0 kubenswrapper[19715]: I0313 12:52:48.808315 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 12:52:48.831567 master-0 kubenswrapper[19715]: I0313 12:52:48.828145 19715 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 12:52:48.842415 master-0 kubenswrapper[19715]: I0313 12:52:48.842372 19715 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 13 12:52:48.848807 master-0 kubenswrapper[19715]: I0313 12:52:48.848757 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 13 12:52:48.917615 master-0 kubenswrapper[19715]: I0313 12:52:48.917533 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 13 12:52:48.991394 master-0 kubenswrapper[19715]: I0313 12:52:48.991330 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 13 12:52:49.038327 master-0 kubenswrapper[19715]: I0313 12:52:49.038170 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 12:52:49.127707 master-0 kubenswrapper[19715]: I0313 12:52:49.127660 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Mar 13 12:52:49.136548 master-0 kubenswrapper[19715]: I0313 12:52:49.136432 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 13 12:52:49.144025 master-0 kubenswrapper[19715]: I0313 12:52:49.143922 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 12:52:49.168326 master-0 kubenswrapper[19715]: I0313 12:52:49.168274 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 12:52:49.188993 master-0 kubenswrapper[19715]: I0313 12:52:49.188896 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 12:52:49.312950 master-0 kubenswrapper[19715]: I0313 12:52:49.312813 19715 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 12:52:49.342080 master-0 kubenswrapper[19715]: I0313 12:52:49.342017 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 12:52:49.387430 master-0 kubenswrapper[19715]: I0313 12:52:49.387380 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-p8xg8" Mar 13 12:52:49.525097 master-0 kubenswrapper[19715]: I0313 12:52:49.525040 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:52:49.603130 master-0 kubenswrapper[19715]: I0313 12:52:49.601964 19715 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 12:52:49.603130 master-0 kubenswrapper[19715]: I0313 12:52:49.602929 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-5-master-0" podStartSLOduration=37.602886716 podStartE2EDuration="37.602886716s" podCreationTimestamp="2026-03-13 12:52:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:36.467884569 +0000 UTC m=+183.034557326" watchObservedRunningTime="2026-03-13 12:52:49.602886716 +0000 UTC m=+196.169559473" Mar 13 12:52:49.603885 master-0 kubenswrapper[19715]: I0313 12:52:49.603825 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 12:52:49.606208 master-0 kubenswrapper[19715]: I0313 12:52:49.606149 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-b649d7df7-lm9xz" podStartSLOduration=44.378742977 podStartE2EDuration="49.606137078s" podCreationTimestamp="2026-03-13 12:52:00 +0000 UTC" 
firstStartedPulling="2026-03-13 12:52:33.168333293 +0000 UTC m=+179.735006050" lastFinishedPulling="2026-03-13 12:52:38.395727384 +0000 UTC m=+184.962400151" observedRunningTime="2026-03-13 12:52:39.567163129 +0000 UTC m=+186.133835896" watchObservedRunningTime="2026-03-13 12:52:49.606137078 +0000 UTC m=+196.172809835" Mar 13 12:52:49.607023 master-0 kubenswrapper[19715]: I0313 12:52:49.606967 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-84f57b9877-nz574" podStartSLOduration=32.162481023 podStartE2EDuration="1m17.606955373s" podCreationTimestamp="2026-03-13 12:51:32 +0000 UTC" firstStartedPulling="2026-03-13 12:51:33.473617593 +0000 UTC m=+120.040290350" lastFinishedPulling="2026-03-13 12:52:18.918091943 +0000 UTC m=+165.484764700" observedRunningTime="2026-03-13 12:52:35.452818175 +0000 UTC m=+182.019490942" watchObservedRunningTime="2026-03-13 12:52:49.606955373 +0000 UTC m=+196.173628150" Mar 13 12:52:49.608321 master-0 kubenswrapper[19715]: I0313 12:52:49.608252 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-cjs56" Mar 13 12:52:49.609333 master-0 kubenswrapper[19715]: I0313 12:52:49.608670 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-6-master-0" podStartSLOduration=40.608657407 podStartE2EDuration="40.608657407s" podCreationTimestamp="2026-03-13 12:52:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:36.489794487 +0000 UTC m=+183.056467264" watchObservedRunningTime="2026-03-13 12:52:49.608657407 +0000 UTC m=+196.175330174" Mar 13 12:52:49.609717 master-0 kubenswrapper[19715]: I0313 12:52:49.609673 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:52:49.609798 master-0 
kubenswrapper[19715]: I0313 12:52:49.609741 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:52:49.609798 master-0 kubenswrapper[19715]: I0313 12:52:49.609763 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-5-master-0","openshift-console/console-b649d7df7-lm9xz","openshift-kube-scheduler/installer-6-master-0"] Mar 13 12:52:49.636492 master-0 kubenswrapper[19715]: I0313 12:52:49.636397 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=14.636376376 podStartE2EDuration="14.636376376s" podCreationTimestamp="2026-03-13 12:52:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:52:49.631947146 +0000 UTC m=+196.198619913" watchObservedRunningTime="2026-03-13 12:52:49.636376376 +0000 UTC m=+196.203049133" Mar 13 12:52:49.756847 master-0 kubenswrapper[19715]: I0313 12:52:49.756778 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7gls2" Mar 13 12:52:49.779841 master-0 kubenswrapper[19715]: I0313 12:52:49.779784 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 12:52:49.822802 master-0 kubenswrapper[19715]: I0313 12:52:49.822731 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 13 12:52:49.894480 master-0 kubenswrapper[19715]: I0313 12:52:49.894410 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 12:52:49.920551 master-0 kubenswrapper[19715]: I0313 12:52:49.920490 19715 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 13 12:52:49.998740 master-0 kubenswrapper[19715]: I0313 12:52:49.998665 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt" Mar 13 12:52:50.013211 master-0 kubenswrapper[19715]: I0313 12:52:50.013143 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 12:52:50.038974 master-0 kubenswrapper[19715]: I0313 12:52:50.038897 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images" Mar 13 12:52:50.185002 master-0 kubenswrapper[19715]: I0313 12:52:50.184870 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 12:52:50.244248 master-0 kubenswrapper[19715]: I0313 12:52:50.244168 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 12:52:50.271045 master-0 kubenswrapper[19715]: I0313 12:52:50.270962 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:52:50.361157 master-0 kubenswrapper[19715]: I0313 12:52:50.361015 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 13 12:52:50.434572 master-0 kubenswrapper[19715]: I0313 12:52:50.434500 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 13 12:52:50.459724 master-0 kubenswrapper[19715]: I0313 12:52:50.459277 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 12:52:50.488684 master-0 kubenswrapper[19715]: I0313 12:52:50.488616 19715 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 13 12:52:50.489530 master-0 kubenswrapper[19715]: I0313 12:52:50.489499 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 13 12:52:50.529784 master-0 kubenswrapper[19715]: I0313 12:52:50.529733 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:52:50.530102 master-0 kubenswrapper[19715]: I0313 12:52:50.530069 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:52:50.562703 master-0 kubenswrapper[19715]: I0313 12:52:50.562648 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 13 12:52:50.685397 master-0 kubenswrapper[19715]: I0313 12:52:50.685309 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 12:52:50.722665 master-0 kubenswrapper[19715]: I0313 12:52:50.722538 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:50.727323 master-0 kubenswrapper[19715]: I0313 12:52:50.727269 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:50.729310 master-0 kubenswrapper[19715]: I0313 12:52:50.729269 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:52:50.776413 master-0 kubenswrapper[19715]: I0313 12:52:50.776343 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 12:52:50.807344 master-0 kubenswrapper[19715]: I0313 12:52:50.807270 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 12:52:50.866630 master-0 kubenswrapper[19715]: I0313 12:52:50.866533 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-qmg42" Mar 13 12:52:50.873930 master-0 kubenswrapper[19715]: I0313 12:52:50.873863 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 12:52:50.893002 master-0 kubenswrapper[19715]: I0313 12:52:50.892938 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 12:52:50.896805 master-0 kubenswrapper[19715]: I0313 12:52:50.896727 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 13 12:52:50.987694 master-0 kubenswrapper[19715]: I0313 12:52:50.987516 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 13 12:52:51.056942 master-0 kubenswrapper[19715]: I0313 12:52:51.056841 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-jwq7f" Mar 13 12:52:51.144387 master-0 kubenswrapper[19715]: I0313 12:52:51.144328 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 13 12:52:51.162288 master-0 kubenswrapper[19715]: I0313 12:52:51.162227 19715 reflector.go:368] 
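The startup-probe failure for `console-b649d7df7-lm9xz` above is a typical klog structured entry: the context after the message is serialized as `key="value"` pairs. A minimal sketch of extracting those fields, using an abridged copy of the `prober.go:107` line (the `output` field, whose value contains escaped quotes, is omitted so a simple regex suffices):

```python
import re

# "Probe failed" message abridged from the kubelet log entry above.
line = ('prober.go:107] "Probe failed" probeType="Startup" '
        'pod="openshift-console/console-b649d7df7-lm9xz" '
        'podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" '
        'containerName="console" probeResult="failure"')

# klog structured entries log context as key="value"; this regex is only
# safe when no value contains an escaped quote.
fields = dict(re.findall(r'(\w+)="([^"]*)"', line))

print(fields["probeType"], fields["pod"], fields["probeResult"])
```

Parsed this way, the entry says the console container's Startup probe against `https://10.128.0.92:8443/health` was refused; the later `SyncLoop (probe)` entries show kube-apiserver's own startup and readiness probes succeeding in the same window.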
Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 13 12:52:51.209299 master-0 kubenswrapper[19715]: I0313 12:52:51.209211 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-ct6jh" Mar 13 12:52:51.257182 master-0 kubenswrapper[19715]: I0313 12:52:51.257043 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 12:52:51.277369 master-0 kubenswrapper[19715]: I0313 12:52:51.277305 19715 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 12:52:51.293460 master-0 kubenswrapper[19715]: I0313 12:52:51.293366 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 13 12:52:51.301393 master-0 kubenswrapper[19715]: I0313 12:52:51.301330 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 12:52:51.331482 master-0 kubenswrapper[19715]: I0313 12:52:51.331409 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 13 12:52:51.376713 master-0 kubenswrapper[19715]: I0313 12:52:51.376647 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 12:52:51.403207 master-0 kubenswrapper[19715]: I0313 12:52:51.403137 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 12:52:51.421975 master-0 kubenswrapper[19715]: I0313 12:52:51.421896 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 13 12:52:51.510373 master-0 kubenswrapper[19715]: I0313 12:52:51.510224 19715 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-dns"/"kube-root-ca.crt" Mar 13 12:52:51.512901 master-0 kubenswrapper[19715]: I0313 12:52:51.512855 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 12:52:51.520204 master-0 kubenswrapper[19715]: I0313 12:52:51.520138 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 13 12:52:51.541512 master-0 kubenswrapper[19715]: I0313 12:52:51.541417 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 12:52:51.551789 master-0 kubenswrapper[19715]: I0313 12:52:51.551743 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 12:52:51.725442 master-0 kubenswrapper[19715]: I0313 12:52:51.720470 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 13 12:52:51.760523 master-0 kubenswrapper[19715]: I0313 12:52:51.760330 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 12:52:51.765431 master-0 kubenswrapper[19715]: I0313 12:52:51.764610 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 12:52:51.802639 master-0 kubenswrapper[19715]: I0313 12:52:51.802115 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 13 12:52:51.843467 master-0 kubenswrapper[19715]: I0313 12:52:51.841153 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-mjc6s" Mar 13 12:52:51.843467 master-0 kubenswrapper[19715]: I0313 12:52:51.841602 19715 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 12:52:51.851208 master-0 kubenswrapper[19715]: I0313 12:52:51.849195 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 12:52:51.869500 master-0 kubenswrapper[19715]: I0313 12:52:51.869429 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 13 12:52:51.869880 master-0 kubenswrapper[19715]: I0313 12:52:51.869671 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-89sxl" Mar 13 12:52:51.878693 master-0 kubenswrapper[19715]: I0313 12:52:51.875446 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 13 12:52:51.893609 master-0 kubenswrapper[19715]: I0313 12:52:51.887706 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 12:52:51.920218 master-0 kubenswrapper[19715]: I0313 12:52:51.920139 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 12:52:51.986483 master-0 kubenswrapper[19715]: I0313 12:52:51.986338 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 13 12:52:52.039042 master-0 kubenswrapper[19715]: I0313 12:52:52.038823 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-lcdwj" Mar 13 12:52:52.082669 master-0 kubenswrapper[19715]: I0313 12:52:52.082601 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 12:52:52.208112 master-0 kubenswrapper[19715]: I0313 12:52:52.208042 19715 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 13 12:52:52.213492 master-0 kubenswrapper[19715]: I0313 12:52:52.211767 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 13 12:52:52.313325 master-0 kubenswrapper[19715]: I0313 12:52:52.313146 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 13 12:52:52.405878 master-0 kubenswrapper[19715]: I0313 12:52:52.405812 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 12:52:52.450859 master-0 kubenswrapper[19715]: I0313 12:52:52.450764 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 12:52:52.459076 master-0 kubenswrapper[19715]: I0313 12:52:52.458730 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 13 12:52:52.493058 master-0 kubenswrapper[19715]: I0313 12:52:52.492987 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 12:52:52.650810 master-0 kubenswrapper[19715]: I0313 12:52:52.650736 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 12:52:52.653072 master-0 kubenswrapper[19715]: I0313 12:52:52.653026 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 13 12:52:52.687332 master-0 kubenswrapper[19715]: I0313 12:52:52.687269 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 12:52:52.734723 master-0 kubenswrapper[19715]: I0313 12:52:52.734624 19715 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 13 12:52:52.809780 master-0 kubenswrapper[19715]: I0313 12:52:52.809722 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 13 12:52:52.813815 master-0 kubenswrapper[19715]: I0313 12:52:52.813749 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt" Mar 13 12:52:52.886873 master-0 kubenswrapper[19715]: I0313 12:52:52.886812 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-46jst" Mar 13 12:52:52.928227 master-0 kubenswrapper[19715]: I0313 12:52:52.928068 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-h5lt2" Mar 13 12:52:52.985058 master-0 kubenswrapper[19715]: I0313 12:52:52.984967 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 12:52:53.076911 master-0 kubenswrapper[19715]: I0313 12:52:53.076843 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 13 12:52:53.084507 master-0 kubenswrapper[19715]: I0313 12:52:53.084446 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 12:52:53.131021 master-0 kubenswrapper[19715]: I0313 12:52:53.130954 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 12:52:53.142843 master-0 kubenswrapper[19715]: I0313 12:52:53.142775 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 
12:52:53.146203 master-0 kubenswrapper[19715]: I0313 12:52:53.146156 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 13 12:52:53.155517 master-0 kubenswrapper[19715]: I0313 12:52:53.155474 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 13 12:52:53.253595 master-0 kubenswrapper[19715]: I0313 12:52:53.253405 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 12:52:53.260097 master-0 kubenswrapper[19715]: I0313 12:52:53.260021 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 13 12:52:53.266444 master-0 kubenswrapper[19715]: I0313 12:52:53.266388 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 13 12:52:53.296737 master-0 kubenswrapper[19715]: I0313 12:52:53.296683 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 13 12:52:53.309034 master-0 kubenswrapper[19715]: I0313 12:52:53.308964 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 13 12:52:53.400206 master-0 kubenswrapper[19715]: I0313 12:52:53.400146 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 13 12:52:53.424359 master-0 kubenswrapper[19715]: I0313 12:52:53.424307 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-gbnht" Mar 13 12:52:53.466110 master-0 kubenswrapper[19715]: I0313 12:52:53.466055 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 12:52:53.475950 master-0 
kubenswrapper[19715]: I0313 12:52:53.475894 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 13 12:52:53.520209 master-0 kubenswrapper[19715]: I0313 12:52:53.520045 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 12:52:53.546528 master-0 kubenswrapper[19715]: I0313 12:52:53.546448 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 12:52:53.568260 master-0 kubenswrapper[19715]: I0313 12:52:53.568211 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 13 12:52:53.569614 master-0 kubenswrapper[19715]: I0313 12:52:53.569542 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 12:52:53.602073 master-0 kubenswrapper[19715]: I0313 12:52:53.602017 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-zllxz" Mar 13 12:52:53.605034 master-0 kubenswrapper[19715]: I0313 12:52:53.604990 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 12:52:53.615005 master-0 kubenswrapper[19715]: I0313 12:52:53.614916 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Mar 13 12:52:53.661319 master-0 kubenswrapper[19715]: I0313 12:52:53.661275 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 12:52:53.702997 master-0 kubenswrapper[19715]: I0313 12:52:53.702898 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 13 12:52:53.711161 master-0 
kubenswrapper[19715]: I0313 12:52:53.711101 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-78fwj" Mar 13 12:52:53.734258 master-0 kubenswrapper[19715]: I0313 12:52:53.734197 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 13 12:52:53.776306 master-0 kubenswrapper[19715]: I0313 12:52:53.776127 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 13 12:52:53.790081 master-0 kubenswrapper[19715]: I0313 12:52:53.790025 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:52:53.850626 master-0 kubenswrapper[19715]: I0313 12:52:53.850562 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 13 12:52:53.932884 master-0 kubenswrapper[19715]: I0313 12:52:53.932836 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 12:52:53.935282 master-0 kubenswrapper[19715]: I0313 12:52:53.935254 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 12:52:54.043467 master-0 kubenswrapper[19715]: I0313 12:52:54.043293 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 12:52:54.064942 master-0 kubenswrapper[19715]: I0313 12:52:54.064884 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 13 12:52:54.084791 master-0 kubenswrapper[19715]: I0313 12:52:54.084719 19715 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-fbzjs" Mar 13 12:52:54.090686 master-0 kubenswrapper[19715]: I0313 12:52:54.090535 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 12:52:54.149151 master-0 kubenswrapper[19715]: I0313 12:52:54.149094 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 12:52:54.173371 master-0 kubenswrapper[19715]: I0313 12:52:54.173245 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 12:52:54.204846 master-0 kubenswrapper[19715]: I0313 12:52:54.201705 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Mar 13 12:52:54.223190 master-0 kubenswrapper[19715]: I0313 12:52:54.223133 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-g589p" Mar 13 12:52:54.248905 master-0 kubenswrapper[19715]: I0313 12:52:54.248815 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 13 12:52:54.302511 master-0 kubenswrapper[19715]: I0313 12:52:54.301916 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 12:52:54.373435 master-0 kubenswrapper[19715]: I0313 12:52:54.372744 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 12:52:54.374340 master-0 kubenswrapper[19715]: I0313 12:52:54.374293 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 13 12:52:54.463144 master-0 kubenswrapper[19715]: I0313 12:52:54.463061 19715 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:52:54.656082 master-0 kubenswrapper[19715]: I0313 12:52:54.612743 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:52:54.659341 master-0 kubenswrapper[19715]: I0313 12:52:54.657646 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 13 12:52:54.727265 master-0 kubenswrapper[19715]: I0313 12:52:54.727196 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 13 12:52:54.797358 master-0 kubenswrapper[19715]: I0313 12:52:54.797289 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dh2b7" Mar 13 12:52:54.805289 master-0 kubenswrapper[19715]: I0313 12:52:54.805201 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 13 12:52:54.817979 master-0 kubenswrapper[19715]: I0313 12:52:54.817903 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gft2f" Mar 13 12:52:54.849857 master-0 kubenswrapper[19715]: I0313 12:52:54.849790 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 13 12:52:54.921562 master-0 kubenswrapper[19715]: I0313 12:52:54.921426 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 12:52:54.931415 master-0 kubenswrapper[19715]: I0313 12:52:54.931356 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:52:54.955684 
master-0 kubenswrapper[19715]: I0313 12:52:54.949364 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 13 12:52:54.955684 master-0 kubenswrapper[19715]: I0313 12:52:54.955482 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 13 12:52:54.965941 master-0 kubenswrapper[19715]: I0313 12:52:54.965389 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 12:52:54.976501 master-0 kubenswrapper[19715]: I0313 12:52:54.976451 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 13 12:52:55.010101 master-0 kubenswrapper[19715]: I0313 12:52:55.010041 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls" Mar 13 12:52:55.021774 master-0 kubenswrapper[19715]: I0313 12:52:55.021709 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Mar 13 12:52:55.047954 master-0 kubenswrapper[19715]: I0313 12:52:55.047612 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 12:52:55.058874 master-0 kubenswrapper[19715]: I0313 12:52:55.058805 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 13 12:52:55.105028 master-0 kubenswrapper[19715]: I0313 12:52:55.104971 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 13 12:52:55.121519 master-0 kubenswrapper[19715]: I0313 12:52:55.121416 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-8qlr6" Mar 13 12:52:55.165414 master-0 
kubenswrapper[19715]: I0313 12:52:55.165341 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 13 12:52:55.198472 master-0 kubenswrapper[19715]: I0313 12:52:55.198298 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 13 12:52:55.241019 master-0 kubenswrapper[19715]: I0313 12:52:55.240957 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 12:52:55.272330 master-0 kubenswrapper[19715]: I0313 12:52:55.272269 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 12:52:55.279707 master-0 kubenswrapper[19715]: I0313 12:52:55.279657 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 13 12:52:55.381352 master-0 kubenswrapper[19715]: I0313 12:52:55.381286 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 13 12:52:55.389246 master-0 kubenswrapper[19715]: I0313 12:52:55.389170 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 13 12:52:55.432968 master-0 kubenswrapper[19715]: I0313 12:52:55.432870 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 12:52:55.518337 master-0 kubenswrapper[19715]: I0313 12:52:55.518195 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 13 12:52:55.612125 master-0 kubenswrapper[19715]: I0313 12:52:55.611912 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 13 12:52:55.612398 
master-0 kubenswrapper[19715]: I0313 12:52:55.612183 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 12:52:55.650647 master-0 kubenswrapper[19715]: I0313 12:52:55.650554 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-8qwx8" Mar 13 12:52:55.671122 master-0 kubenswrapper[19715]: I0313 12:52:55.670916 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 12:52:55.673081 master-0 kubenswrapper[19715]: I0313 12:52:55.672930 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 13 12:52:55.689208 master-0 kubenswrapper[19715]: I0313 12:52:55.689163 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 12:52:55.703676 master-0 kubenswrapper[19715]: I0313 12:52:55.703632 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 12:52:55.709135 master-0 kubenswrapper[19715]: I0313 12:52:55.709066 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 13 12:52:55.725199 master-0 kubenswrapper[19715]: I0313 12:52:55.725138 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 12:52:55.764486 master-0 kubenswrapper[19715]: I0313 12:52:55.764412 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 12:52:55.814896 master-0 kubenswrapper[19715]: I0313 12:52:55.814774 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 
12:52:55.917163 master-0 kubenswrapper[19715]: I0313 12:52:55.917084 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 12:52:55.926105 master-0 kubenswrapper[19715]: I0313 12:52:55.926038 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 12:52:55.991330 master-0 kubenswrapper[19715]: I0313 12:52:55.990539 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 12:52:56.027976 master-0 kubenswrapper[19715]: I0313 12:52:56.027915 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 12:52:56.134663 master-0 kubenswrapper[19715]: I0313 12:52:56.134616 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 13 12:52:56.149485 master-0 kubenswrapper[19715]: I0313 12:52:56.149400 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 12:52:56.157275 master-0 kubenswrapper[19715]: I0313 12:52:56.157217 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 12:52:56.187721 master-0 kubenswrapper[19715]: I0313 12:52:56.187560 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 12:52:56.233604 master-0 kubenswrapper[19715]: I0313 12:52:56.233521 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 12:52:56.360953 master-0 kubenswrapper[19715]: I0313 12:52:56.360905 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 12:52:56.415929 master-0 
kubenswrapper[19715]: I0313 12:52:56.415794 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 12:52:56.430401 master-0 kubenswrapper[19715]: I0313 12:52:56.428988 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 12:52:56.539610 master-0 kubenswrapper[19715]: I0313 12:52:56.536168 19715 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 12:52:56.540276 master-0 kubenswrapper[19715]: I0313 12:52:56.540228 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-wsv7b" Mar 13 12:52:56.540984 master-0 kubenswrapper[19715]: I0313 12:52:56.540960 19715 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 12:52:56.582855 master-0 kubenswrapper[19715]: I0313 12:52:56.582785 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 13 12:52:56.609211 master-0 kubenswrapper[19715]: I0313 12:52:56.609131 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 13 12:52:56.631923 master-0 kubenswrapper[19715]: I0313 12:52:56.631859 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 13 12:52:56.762090 master-0 kubenswrapper[19715]: I0313 12:52:56.761958 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 13 12:52:56.874534 master-0 kubenswrapper[19715]: I0313 12:52:56.874474 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 12:52:56.906366 master-0 kubenswrapper[19715]: I0313 12:52:56.906293 19715 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 12:52:56.933935 master-0 kubenswrapper[19715]: I0313 12:52:56.933759 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 12:52:56.960253 master-0 kubenswrapper[19715]: I0313 12:52:56.960170 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-mwrx7" Mar 13 12:52:56.967946 master-0 kubenswrapper[19715]: I0313 12:52:56.967911 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 12:52:57.062175 master-0 kubenswrapper[19715]: I0313 12:52:57.062033 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 12:52:57.183743 master-0 kubenswrapper[19715]: I0313 12:52:57.183705 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 12:52:57.293342 master-0 kubenswrapper[19715]: I0313 12:52:57.293291 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 12:52:57.326571 master-0 kubenswrapper[19715]: I0313 12:52:57.325910 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 13 12:52:57.364166 master-0 kubenswrapper[19715]: I0313 12:52:57.364096 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 12:52:57.418363 master-0 kubenswrapper[19715]: I0313 12:52:57.418278 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 13 12:52:57.452507 master-0 kubenswrapper[19715]: I0313 12:52:57.452428 19715 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 12:52:57.544916 master-0 kubenswrapper[19715]: I0313 12:52:57.544852 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 13 12:52:57.585639 master-0 kubenswrapper[19715]: I0313 12:52:57.585461 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 12:52:57.604756 master-0 kubenswrapper[19715]: I0313 12:52:57.604689 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 12:52:57.605190 master-0 kubenswrapper[19715]: I0313 12:52:57.605105 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="acbb43bf2cf27ed60d1f635fd6638ac7" containerName="startup-monitor" containerID="cri-o://4b40357715494cbae0cee70bec112e496fcdeddba27e7b49134620a4e190c738" gracePeriod=5 Mar 13 12:52:57.617565 master-0 kubenswrapper[19715]: I0313 12:52:57.617504 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 13 12:52:57.688777 master-0 kubenswrapper[19715]: I0313 12:52:57.688691 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 13 12:52:57.694990 master-0 kubenswrapper[19715]: I0313 12:52:57.694905 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 13 12:52:57.715842 master-0 kubenswrapper[19715]: I0313 12:52:57.715778 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 12:52:57.751350 master-0 kubenswrapper[19715]: I0313 12:52:57.751280 19715 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 12:52:57.784795 master-0 kubenswrapper[19715]: I0313 12:52:57.784720 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 12:52:57.820884 master-0 kubenswrapper[19715]: I0313 12:52:57.820813 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 12:52:57.832907 master-0 kubenswrapper[19715]: I0313 12:52:57.832836 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 13 12:52:57.861878 master-0 kubenswrapper[19715]: I0313 12:52:57.861720 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 13 12:52:57.863978 master-0 kubenswrapper[19715]: I0313 12:52:57.863937 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4f2vw" Mar 13 12:52:57.882253 master-0 kubenswrapper[19715]: I0313 12:52:57.882175 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 12:52:57.924296 master-0 kubenswrapper[19715]: I0313 12:52:57.924210 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-qggps" Mar 13 12:52:58.028922 master-0 kubenswrapper[19715]: I0313 12:52:58.028866 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 12:52:58.090109 master-0 kubenswrapper[19715]: I0313 12:52:58.090039 19715 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-marketplace"/"redhat-operators-dockercfg-6slw7" Mar 13 12:52:58.157425 master-0 kubenswrapper[19715]: I0313 12:52:58.157350 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 12:52:58.182210 master-0 kubenswrapper[19715]: I0313 12:52:58.182120 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 13 12:52:58.273941 master-0 kubenswrapper[19715]: I0313 12:52:58.273886 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 13 12:52:58.304503 master-0 kubenswrapper[19715]: I0313 12:52:58.304391 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 12:52:58.316989 master-0 kubenswrapper[19715]: I0313 12:52:58.316928 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-5lcmq" Mar 13 12:52:58.340151 master-0 kubenswrapper[19715]: I0313 12:52:58.340078 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 12:52:58.340409 master-0 kubenswrapper[19715]: I0313 12:52:58.340320 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 13 12:52:58.396371 master-0 kubenswrapper[19715]: I0313 12:52:58.396300 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 13 12:52:58.418073 master-0 kubenswrapper[19715]: I0313 12:52:58.417933 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 12:52:58.456813 master-0 kubenswrapper[19715]: I0313 
12:52:58.456760 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 13 12:52:58.489177 master-0 kubenswrapper[19715]: I0313 12:52:58.489109 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 13 12:52:58.495204 master-0 kubenswrapper[19715]: I0313 12:52:58.495152 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 12:52:58.497537 master-0 kubenswrapper[19715]: I0313 12:52:58.497496 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 12:52:58.546686 master-0 kubenswrapper[19715]: I0313 12:52:58.546620 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-zzprz" Mar 13 12:52:58.561552 master-0 kubenswrapper[19715]: I0313 12:52:58.561470 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 12:52:58.613943 master-0 kubenswrapper[19715]: I0313 12:52:58.613894 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 13 12:52:58.621289 master-0 kubenswrapper[19715]: I0313 12:52:58.621243 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 12:52:58.629898 master-0 kubenswrapper[19715]: I0313 12:52:58.629831 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 13 12:52:58.645519 master-0 kubenswrapper[19715]: I0313 12:52:58.645452 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 13 12:52:58.687621 master-0 kubenswrapper[19715]: I0313 
12:52:58.687443 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 12:52:58.694065 master-0 kubenswrapper[19715]: I0313 12:52:58.694003 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 13 12:52:58.732869 master-0 kubenswrapper[19715]: I0313 12:52:58.732809 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-84k2dnesbumig" Mar 13 12:52:58.918023 master-0 kubenswrapper[19715]: I0313 12:52:58.917954 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 13 12:52:58.938991 master-0 kubenswrapper[19715]: I0313 12:52:58.938854 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 12:52:59.047131 master-0 kubenswrapper[19715]: I0313 12:52:59.047046 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 13 12:52:59.075704 master-0 kubenswrapper[19715]: I0313 12:52:59.075625 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 12:52:59.094127 master-0 kubenswrapper[19715]: I0313 12:52:59.094027 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 13 12:52:59.097712 master-0 kubenswrapper[19715]: I0313 12:52:59.097656 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 12:52:59.198407 master-0 kubenswrapper[19715]: I0313 12:52:59.198354 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7pbjup2gcsfqa" Mar 13 12:52:59.407777 master-0 
kubenswrapper[19715]: I0313 12:52:59.407602 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 13 12:52:59.466636 master-0 kubenswrapper[19715]: I0313 12:52:59.466464 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 13 12:52:59.604126 master-0 kubenswrapper[19715]: I0313 12:52:59.604082 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-2tphk" Mar 13 12:52:59.605694 master-0 kubenswrapper[19715]: I0313 12:52:59.605660 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 12:52:59.809527 master-0 kubenswrapper[19715]: I0313 12:52:59.809392 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 12:52:59.810135 master-0 kubenswrapper[19715]: I0313 12:52:59.810082 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 12:52:59.878742 master-0 kubenswrapper[19715]: I0313 12:52:59.878697 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:52:59.909662 master-0 kubenswrapper[19715]: I0313 12:52:59.909615 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 13 12:52:59.940998 master-0 kubenswrapper[19715]: I0313 12:52:59.940942 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 12:53:00.028706 master-0 kubenswrapper[19715]: I0313 12:53:00.028604 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" 
Mar 13 12:53:00.126888 master-0 kubenswrapper[19715]: I0313 12:53:00.126708 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 12:53:00.186101 master-0 kubenswrapper[19715]: I0313 12:53:00.186055 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 12:53:00.362530 master-0 kubenswrapper[19715]: I0313 12:53:00.362456 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 13 12:53:00.445713 master-0 kubenswrapper[19715]: I0313 12:53:00.445636 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 13 12:53:00.464623 master-0 kubenswrapper[19715]: I0313 12:53:00.464518 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 13 12:53:00.530107 master-0 kubenswrapper[19715]: I0313 12:53:00.530032 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:53:00.530408 master-0 kubenswrapper[19715]: I0313 12:53:00.530113 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:53:00.595086 master-0 kubenswrapper[19715]: I0313 12:53:00.594994 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-7fzhf" Mar 13 12:53:00.676937 master-0 kubenswrapper[19715]: I0313 
12:53:00.676883 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 12:53:00.786433 master-0 kubenswrapper[19715]: I0313 12:53:00.786320 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 12:53:00.949321 master-0 kubenswrapper[19715]: I0313 12:53:00.949257 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7gz29" Mar 13 12:53:01.374387 master-0 kubenswrapper[19715]: I0313 12:53:01.374312 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 12:53:01.414749 master-0 kubenswrapper[19715]: I0313 12:53:01.414695 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 12:53:01.811894 master-0 kubenswrapper[19715]: I0313 12:53:01.811821 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 12:53:01.875812 master-0 kubenswrapper[19715]: I0313 12:53:01.875750 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 13 12:53:01.987214 master-0 kubenswrapper[19715]: I0313 12:53:01.987167 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xg9t5" Mar 13 12:53:02.083022 master-0 kubenswrapper[19715]: I0313 12:53:02.082896 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-kcbnp" Mar 13 12:53:02.099970 master-0 kubenswrapper[19715]: I0313 12:53:02.099921 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 13 12:53:02.434888 master-0 
kubenswrapper[19715]: I0313 12:53:02.434833 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 12:53:02.845810 master-0 kubenswrapper[19715]: I0313 12:53:02.845706 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_acbb43bf2cf27ed60d1f635fd6638ac7/startup-monitor/0.log" Mar 13 12:53:02.845810 master-0 kubenswrapper[19715]: I0313 12:53:02.845778 19715 generic.go:334] "Generic (PLEG): container finished" podID="acbb43bf2cf27ed60d1f635fd6638ac7" containerID="4b40357715494cbae0cee70bec112e496fcdeddba27e7b49134620a4e190c738" exitCode=137 Mar 13 12:53:03.183955 master-0 kubenswrapper[19715]: I0313 12:53:03.183885 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_acbb43bf2cf27ed60d1f635fd6638ac7/startup-monitor/0.log" Mar 13 12:53:03.184245 master-0 kubenswrapper[19715]: I0313 12:53:03.184038 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:03.333406 master-0 kubenswrapper[19715]: I0313 12:53:03.333340 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") pod \"acbb43bf2cf27ed60d1f635fd6638ac7\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " Mar 13 12:53:03.333406 master-0 kubenswrapper[19715]: I0313 12:53:03.333401 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") pod \"acbb43bf2cf27ed60d1f635fd6638ac7\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " Mar 13 12:53:03.333814 master-0 kubenswrapper[19715]: I0313 12:53:03.333488 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") pod \"acbb43bf2cf27ed60d1f635fd6638ac7\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " Mar 13 12:53:03.333814 master-0 kubenswrapper[19715]: I0313 12:53:03.333595 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") pod \"acbb43bf2cf27ed60d1f635fd6638ac7\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " Mar 13 12:53:03.333814 master-0 kubenswrapper[19715]: I0313 12:53:03.333618 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") pod \"acbb43bf2cf27ed60d1f635fd6638ac7\" (UID: \"acbb43bf2cf27ed60d1f635fd6638ac7\") " Mar 13 12:53:03.334108 master-0 kubenswrapper[19715]: I0313 12:53:03.334076 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests" (OuterVolumeSpecName: "manifests") pod "acbb43bf2cf27ed60d1f635fd6638ac7" (UID: "acbb43bf2cf27ed60d1f635fd6638ac7"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:03.334228 master-0 kubenswrapper[19715]: I0313 12:53:03.334076 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "acbb43bf2cf27ed60d1f635fd6638ac7" (UID: "acbb43bf2cf27ed60d1f635fd6638ac7"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:03.334351 master-0 kubenswrapper[19715]: I0313 12:53:03.334090 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log" (OuterVolumeSpecName: "var-log") pod "acbb43bf2cf27ed60d1f635fd6638ac7" (UID: "acbb43bf2cf27ed60d1f635fd6638ac7"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:03.334446 master-0 kubenswrapper[19715]: I0313 12:53:03.334118 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock" (OuterVolumeSpecName: "var-lock") pod "acbb43bf2cf27ed60d1f635fd6638ac7" (UID: "acbb43bf2cf27ed60d1f635fd6638ac7"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:03.339109 master-0 kubenswrapper[19715]: I0313 12:53:03.339044 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "acbb43bf2cf27ed60d1f635fd6638ac7" (UID: "acbb43bf2cf27ed60d1f635fd6638ac7"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:03.435119 master-0 kubenswrapper[19715]: I0313 12:53:03.435048 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:03.435119 master-0 kubenswrapper[19715]: I0313 12:53:03.435100 19715 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:03.435119 master-0 kubenswrapper[19715]: I0313 12:53:03.435115 19715 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-var-log\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:03.435119 master-0 kubenswrapper[19715]: I0313 12:53:03.435131 19715 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:03.435119 master-0 kubenswrapper[19715]: I0313 12:53:03.435145 19715 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/acbb43bf2cf27ed60d1f635fd6638ac7-manifests\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:03.707252 master-0 kubenswrapper[19715]: I0313 12:53:03.705759 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="acbb43bf2cf27ed60d1f635fd6638ac7" path="/var/lib/kubelet/pods/acbb43bf2cf27ed60d1f635fd6638ac7/volumes" Mar 13 12:53:03.855890 master-0 kubenswrapper[19715]: I0313 12:53:03.855838 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_acbb43bf2cf27ed60d1f635fd6638ac7/startup-monitor/0.log" Mar 13 12:53:03.856183 master-0 
kubenswrapper[19715]: I0313 12:53:03.855986 19715 scope.go:117] "RemoveContainer" containerID="4b40357715494cbae0cee70bec112e496fcdeddba27e7b49134620a4e190c738" Mar 13 12:53:03.856183 master-0 kubenswrapper[19715]: I0313 12:53:03.856024 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:53:10.529948 master-0 kubenswrapper[19715]: I0313 12:53:10.529885 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:53:10.530673 master-0 kubenswrapper[19715]: I0313 12:53:10.529954 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:53:15.941600 master-0 kubenswrapper[19715]: I0313 12:53:15.941533 19715 generic.go:334] "Generic (PLEG): container finished" podID="6e4e773c-d970-4f5e-9172-c1ebdb41888d" containerID="78ae5f5f6dbecb618369b89512191ed3dcff14b5aecf6f0222631f845d48f587" exitCode=0 Mar 13 12:53:15.942244 master-0 kubenswrapper[19715]: I0313 12:53:15.941625 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" event={"ID":"6e4e773c-d970-4f5e-9172-c1ebdb41888d","Type":"ContainerDied","Data":"78ae5f5f6dbecb618369b89512191ed3dcff14b5aecf6f0222631f845d48f587"} Mar 13 12:53:15.942409 master-0 kubenswrapper[19715]: I0313 12:53:15.942394 19715 scope.go:117] "RemoveContainer" containerID="712ae7e99e5d583d4f1cf7b4f887ed7099fd3d43e3fe5272361b3bb4ea67be51" Mar 13 12:53:15.943054 master-0 kubenswrapper[19715]: I0313 12:53:15.943026 19715 
scope.go:117] "RemoveContainer" containerID="78ae5f5f6dbecb618369b89512191ed3dcff14b5aecf6f0222631f845d48f587"
Mar 13 12:53:16.951704 master-0 kubenswrapper[19715]: I0313 12:53:16.951633 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" event={"ID":"6e4e773c-d970-4f5e-9172-c1ebdb41888d","Type":"ContainerStarted","Data":"82ec61ebd3cfad1166f1099232b3eab436011df1e1a88d79d59c944d88861af1"}
Mar 13 12:53:16.952363 master-0 kubenswrapper[19715]: I0313 12:53:16.951959 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:53:16.953440 master-0 kubenswrapper[19715]: I0313 12:53:16.953380 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:53:20.530065 master-0 kubenswrapper[19715]: I0313 12:53:20.529994 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:53:20.530742 master-0 kubenswrapper[19715]: I0313 12:53:20.530073 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:53:25.717889 master-0 kubenswrapper[19715]: I0313 12:53:25.717512 19715 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 12:53:25.717889 master-0 kubenswrapper[19715]: I0313 12:53:25.717607 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 12:53:27.735962 master-0 kubenswrapper[19715]: I0313 12:53:27.735880 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 12:53:27.736707 master-0 kubenswrapper[19715]: I0313 12:53:27.736238 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler" containerID="cri-o://e0df16178a78e597a7ee479c2a01d936d3b8faaeddfcab7a0e0bd1705858f6b0" gracePeriod=30
Mar 13 12:53:27.736707 master-0 kubenswrapper[19715]: I0313 12:53:27.736301 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-recovery-controller" containerID="cri-o://5e26810c41b04d6b7b18d460530be0d6b5cfdaf88d1a68d92b5c14e7b7261ce3" gracePeriod=30
Mar 13 12:53:27.736707 master-0 kubenswrapper[19715]: I0313 12:53:27.736383 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-cert-syncer" containerID="cri-o://9558436851ea5e9f09168e4882a85b318bea857709da4a1c87ae463ce4701ae4" gracePeriod=30
Mar 13 12:53:27.737570 master-0 kubenswrapper[19715]: I0313 12:53:27.737098 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 12:53:27.737817 master-0 kubenswrapper[19715]: E0313 12:53:27.737773 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="acbb43bf2cf27ed60d1f635fd6638ac7" containerName="startup-monitor"
Mar 13 12:53:27.737817 master-0 kubenswrapper[19715]: I0313 12:53:27.737820 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="acbb43bf2cf27ed60d1f635fd6638ac7" containerName="startup-monitor"
Mar 13 12:53:27.737948 master-0 kubenswrapper[19715]: E0313 12:53:27.737837 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-cert-syncer"
Mar 13 12:53:27.737948 master-0 kubenswrapper[19715]: I0313 12:53:27.737844 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-cert-syncer"
Mar 13 12:53:27.737948 master-0 kubenswrapper[19715]: E0313 12:53:27.737856 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler"
Mar 13 12:53:27.737948 master-0 kubenswrapper[19715]: I0313 12:53:27.737863 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler"
Mar 13 12:53:27.737948 master-0 kubenswrapper[19715]: E0313 12:53:27.737904 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-recovery-controller"
Mar 13 12:53:27.737948 master-0 kubenswrapper[19715]: I0313 12:53:27.737914 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-recovery-controller"
Mar 13 12:53:27.737948 master-0 kubenswrapper[19715]: E0313 12:53:27.737934 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="wait-for-host-port"
Mar 13 12:53:27.737948 master-0 kubenswrapper[19715]: I0313 12:53:27.737942 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="wait-for-host-port"
Mar 13 12:53:27.738324 master-0 kubenswrapper[19715]: E0313 12:53:27.737963 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" containerName="installer"
Mar 13 12:53:27.738324 master-0 kubenswrapper[19715]: I0313 12:53:27.737973 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" containerName="installer"
Mar 13 12:53:27.738324 master-0 kubenswrapper[19715]: I0313 12:53:27.738236 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="acbb43bf2cf27ed60d1f635fd6638ac7" containerName="startup-monitor"
Mar 13 12:53:27.738324 master-0 kubenswrapper[19715]: I0313 12:53:27.738255 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-recovery-controller"
Mar 13 12:53:27.738324 master-0 kubenswrapper[19715]: I0313 12:53:27.738294 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler-cert-syncer"
Mar 13 12:53:27.738324 master-0 kubenswrapper[19715]: I0313 12:53:27.738309 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="cc6e9ceb-c6bf-409f-b515-b441a94db482" containerName="installer"
Mar 13 12:53:27.738324 master-0 kubenswrapper[19715]: I0313 12:53:27.738320 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="1453f6461bf5d599ad65a4656343ee91" containerName="kube-scheduler"
Mar 13 12:53:27.918665 master-0 kubenswrapper[19715]: I0313 12:53:27.918565 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler-cert-syncer/0.log"
Mar 13 12:53:27.920316 master-0 kubenswrapper[19715]: I0313 12:53:27.920262 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:27.923707 master-0 kubenswrapper[19715]: I0313 12:53:27.923649 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:27.923958 master-0 kubenswrapper[19715]: I0313 12:53:27.923762 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:27.924455 master-0 kubenswrapper[19715]: I0313 12:53:27.924392 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1453f6461bf5d599ad65a4656343ee91" podUID="aa6a75ab47c06be4e74d05f552da4470"
Mar 13 12:53:28.025245 master-0 kubenswrapper[19715]: I0313 12:53:28.025008 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"1453f6461bf5d599ad65a4656343ee91\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") "
Mar 13 12:53:28.025245 master-0 kubenswrapper[19715]: I0313 12:53:28.025194 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"1453f6461bf5d599ad65a4656343ee91\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") "
Mar 13 12:53:28.025676 master-0 kubenswrapper[19715]: I0313 12:53:28.025176 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "1453f6461bf5d599ad65a4656343ee91" (UID: "1453f6461bf5d599ad65a4656343ee91"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:53:28.025676 master-0 kubenswrapper[19715]: I0313 12:53:28.025263 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "1453f6461bf5d599ad65a4656343ee91" (UID: "1453f6461bf5d599ad65a4656343ee91"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:53:28.025676 master-0 kubenswrapper[19715]: I0313 12:53:28.025537 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:28.025676 master-0 kubenswrapper[19715]: I0313 12:53:28.025652 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:28.025856 master-0 kubenswrapper[19715]: I0313 12:53:28.025727 19715 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:53:28.025856 master-0 kubenswrapper[19715]: I0313 12:53:28.025747 19715 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:53:28.025856 master-0 kubenswrapper[19715]: I0313 12:53:28.025751 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:28.025856 master-0 kubenswrapper[19715]: I0313 12:53:28.025819 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/aa6a75ab47c06be4e74d05f552da4470-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"aa6a75ab47c06be4e74d05f552da4470\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:28.031698 master-0 kubenswrapper[19715]: I0313 12:53:28.031644 19715 generic.go:334] "Generic (PLEG): container finished" podID="b5f67c2e-1d8e-4315-bef7-c8015516cae0" containerID="8ba50dfd5da7175c582a40fb68181b94c7be7b21b0a5b5383ecfe8762b301384" exitCode=0
Mar 13 12:53:28.031845 master-0 kubenswrapper[19715]: I0313 12:53:28.031737 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"b5f67c2e-1d8e-4315-bef7-c8015516cae0","Type":"ContainerDied","Data":"8ba50dfd5da7175c582a40fb68181b94c7be7b21b0a5b5383ecfe8762b301384"}
Mar 13 12:53:28.035750 master-0 kubenswrapper[19715]: I0313 12:53:28.035670 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler-cert-syncer/0.log"
Mar 13 12:53:28.038293 master-0 kubenswrapper[19715]: I0313 12:53:28.038243 19715 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="5e26810c41b04d6b7b18d460530be0d6b5cfdaf88d1a68d92b5c14e7b7261ce3" exitCode=0
Mar 13 12:53:28.038293 master-0 kubenswrapper[19715]: I0313 12:53:28.038276 19715 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="9558436851ea5e9f09168e4882a85b318bea857709da4a1c87ae463ce4701ae4" exitCode=2
Mar 13 12:53:28.038293 master-0 kubenswrapper[19715]: I0313 12:53:28.038293 19715 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="e0df16178a78e597a7ee479c2a01d936d3b8faaeddfcab7a0e0bd1705858f6b0" exitCode=0
Mar 13 12:53:28.038466 master-0 kubenswrapper[19715]: I0313 12:53:28.038316 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 12:53:28.038466 master-0 kubenswrapper[19715]: I0313 12:53:28.038334 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eef54790236aafb1ff6e4d20cddad15b6274928c80b4b8b66f54b00403de14ff"
Mar 13 12:53:28.057984 master-0 kubenswrapper[19715]: I0313 12:53:28.057915 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1453f6461bf5d599ad65a4656343ee91" podUID="aa6a75ab47c06be4e74d05f552da4470"
Mar 13 12:53:28.066178 master-0 kubenswrapper[19715]: I0313 12:53:28.066096 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1453f6461bf5d599ad65a4656343ee91" podUID="aa6a75ab47c06be4e74d05f552da4470"
Mar 13 12:53:29.317595 master-0 kubenswrapper[19715]: I0313 12:53:29.317519 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 12:53:29.318026 master-0 kubenswrapper[19715]: I0313 12:53:29.317881 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e95e98146cf857064826636918715dbe" containerName="cluster-policy-controller" containerID="cri-o://c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b" gracePeriod=30
Mar 13 12:53:29.318109 master-0 kubenswrapper[19715]: I0313 12:53:29.318068 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager" containerID="cri-o://bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a" gracePeriod=30
Mar 13 12:53:29.318206 master-0 kubenswrapper[19715]: I0313 12:53:29.318149 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597" gracePeriod=30
Mar 13 12:53:29.318265 master-0 kubenswrapper[19715]: I0313 12:53:29.318215 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc" gracePeriod=30
Mar 13 12:53:29.319031 master-0 kubenswrapper[19715]: I0313 12:53:29.318914 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 12:53:29.319321 master-0 kubenswrapper[19715]: E0313 12:53:29.319286 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager-recovery-controller"
Mar 13 12:53:29.319321 master-0 kubenswrapper[19715]: I0313 12:53:29.319314 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager-recovery-controller"
Mar 13 12:53:29.319402 master-0 kubenswrapper[19715]: E0313 12:53:29.319337 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95e98146cf857064826636918715dbe" containerName="cluster-policy-controller"
Mar 13 12:53:29.319402 master-0 kubenswrapper[19715]: I0313 12:53:29.319345 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95e98146cf857064826636918715dbe" containerName="cluster-policy-controller"
Mar 13 12:53:29.319402 master-0 kubenswrapper[19715]: E0313 12:53:29.319379 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager"
Mar 13 12:53:29.319402 master-0 kubenswrapper[19715]: I0313 12:53:29.319387 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager"
Mar 13 12:53:29.319402 master-0 kubenswrapper[19715]: E0313 12:53:29.319395 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager"
Mar 13 12:53:29.319402 master-0 kubenswrapper[19715]: I0313 12:53:29.319402 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager"
Mar 13 12:53:29.319680 master-0 kubenswrapper[19715]: E0313 12:53:29.319416 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager-cert-syncer"
Mar 13 12:53:29.319680 master-0 kubenswrapper[19715]: I0313 12:53:29.319425 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager-cert-syncer"
Mar 13 12:53:29.319680 master-0 kubenswrapper[19715]: I0313 12:53:29.319617 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager-recovery-controller"
Mar 13 12:53:29.319680 master-0 kubenswrapper[19715]: I0313 12:53:29.319647 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager"
Mar 13 12:53:29.319680 master-0 kubenswrapper[19715]: I0313 12:53:29.319667 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95e98146cf857064826636918715dbe" containerName="cluster-policy-controller"
Mar 13 12:53:29.319680 master-0 kubenswrapper[19715]: I0313 12:53:29.319679 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager"
Mar 13 12:53:29.319869 master-0 kubenswrapper[19715]: I0313 12:53:29.319699 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="e95e98146cf857064826636918715dbe" containerName="kube-controller-manager-cert-syncer"
Mar 13 12:53:29.460592 master-0 kubenswrapper[19715]: I0313 12:53:29.459664 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"801e0e0ab4a7a1c742dfa21c487f9cca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:29.460592 master-0 kubenswrapper[19715]: I0313 12:53:29.459794 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"801e0e0ab4a7a1c742dfa21c487f9cca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:29.491708 master-0 kubenswrapper[19715]: I0313 12:53:29.491628 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 12:53:29.496923 master-0 kubenswrapper[19715]: I0313 12:53:29.496405 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="e95e98146cf857064826636918715dbe" podUID="801e0e0ab4a7a1c742dfa21c487f9cca"
Mar 13 12:53:29.501200 master-0 kubenswrapper[19715]: I0313 12:53:29.501110 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e95e98146cf857064826636918715dbe/kube-controller-manager-cert-syncer/0.log"
Mar 13 12:53:29.503128 master-0 kubenswrapper[19715]: I0313 12:53:29.503066 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e95e98146cf857064826636918715dbe/kube-controller-manager/0.log"
Mar 13 12:53:29.503242 master-0 kubenswrapper[19715]: I0313 12:53:29.503217 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:29.517891 master-0 kubenswrapper[19715]: I0313 12:53:29.517830 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="e95e98146cf857064826636918715dbe" podUID="801e0e0ab4a7a1c742dfa21c487f9cca"
Mar 13 12:53:29.562544 master-0 kubenswrapper[19715]: I0313 12:53:29.562044 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"801e0e0ab4a7a1c742dfa21c487f9cca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:29.562544 master-0 kubenswrapper[19715]: I0313 12:53:29.562136 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"801e0e0ab4a7a1c742dfa21c487f9cca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:29.562544 master-0 kubenswrapper[19715]: I0313 12:53:29.562231 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"801e0e0ab4a7a1c742dfa21c487f9cca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:29.562544 master-0 kubenswrapper[19715]: I0313 12:53:29.562419 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"801e0e0ab4a7a1c742dfa21c487f9cca\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:29.664548 master-0 kubenswrapper[19715]: I0313 12:53:29.664456 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-var-lock\") pod \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") "
Mar 13 12:53:29.664853 master-0 kubenswrapper[19715]: I0313 12:53:29.664617 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kubelet-dir\") pod \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") "
Mar 13 12:53:29.664853 master-0 kubenswrapper[19715]: I0313 12:53:29.664740 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kube-api-access\") pod \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\" (UID: \"b5f67c2e-1d8e-4315-bef7-c8015516cae0\") "
Mar 13 12:53:29.664853 master-0 kubenswrapper[19715]: I0313 12:53:29.664803 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e95e98146cf857064826636918715dbe-cert-dir\") pod \"e95e98146cf857064826636918715dbe\" (UID: \"e95e98146cf857064826636918715dbe\") "
Mar 13 12:53:29.664853 master-0 kubenswrapper[19715]: I0313 12:53:29.664856 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e95e98146cf857064826636918715dbe-resource-dir\") pod \"e95e98146cf857064826636918715dbe\" (UID: \"e95e98146cf857064826636918715dbe\") "
Mar 13 12:53:29.665016 master-0 kubenswrapper[19715]: I0313 12:53:29.664939 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-var-lock" (OuterVolumeSpecName: "var-lock") pod "b5f67c2e-1d8e-4315-bef7-c8015516cae0" (UID: "b5f67c2e-1d8e-4315-bef7-c8015516cae0"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:53:29.665067 master-0 kubenswrapper[19715]: I0313 12:53:29.665026 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e95e98146cf857064826636918715dbe-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "e95e98146cf857064826636918715dbe" (UID: "e95e98146cf857064826636918715dbe"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:53:29.665136 master-0 kubenswrapper[19715]: I0313 12:53:29.665091 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e95e98146cf857064826636918715dbe-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "e95e98146cf857064826636918715dbe" (UID: "e95e98146cf857064826636918715dbe"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:53:29.665195 master-0 kubenswrapper[19715]: I0313 12:53:29.665155 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b5f67c2e-1d8e-4315-bef7-c8015516cae0" (UID: "b5f67c2e-1d8e-4315-bef7-c8015516cae0"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:53:29.665624 master-0 kubenswrapper[19715]: I0313 12:53:29.665594 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:53:29.665682 master-0 kubenswrapper[19715]: I0313 12:53:29.665620 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:53:29.665682 master-0 kubenswrapper[19715]: I0313 12:53:29.665636 19715 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/e95e98146cf857064826636918715dbe-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:53:29.665682 master-0 kubenswrapper[19715]: I0313 12:53:29.665645 19715 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/e95e98146cf857064826636918715dbe-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:53:29.668869 master-0 kubenswrapper[19715]: I0313 12:53:29.668800 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b5f67c2e-1d8e-4315-bef7-c8015516cae0" (UID: "b5f67c2e-1d8e-4315-bef7-c8015516cae0"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:53:29.706803 master-0 kubenswrapper[19715]: I0313 12:53:29.706736 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1453f6461bf5d599ad65a4656343ee91" path="/var/lib/kubelet/pods/1453f6461bf5d599ad65a4656343ee91/volumes"
Mar 13 12:53:29.707421 master-0 kubenswrapper[19715]: I0313 12:53:29.707385 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e95e98146cf857064826636918715dbe" path="/var/lib/kubelet/pods/e95e98146cf857064826636918715dbe/volumes"
Mar 13 12:53:29.767508 master-0 kubenswrapper[19715]: I0313 12:53:29.767460 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b5f67c2e-1d8e-4315-bef7-c8015516cae0-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 12:53:30.055638 master-0 kubenswrapper[19715]: I0313 12:53:30.055526 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-6-master-0" event={"ID":"b5f67c2e-1d8e-4315-bef7-c8015516cae0","Type":"ContainerDied","Data":"d4fe0707150c31ec3ce06d06c1f4ca00d8cc66eff07e8cd7a34b7a0ce43d8df8"}
Mar 13 12:53:30.055638 master-0 kubenswrapper[19715]: I0313 12:53:30.055639 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d4fe0707150c31ec3ce06d06c1f4ca00d8cc66eff07e8cd7a34b7a0ce43d8df8"
Mar 13 12:53:30.056014 master-0 kubenswrapper[19715]: I0313 12:53:30.055717 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-6-master-0"
Mar 13 12:53:30.058248 master-0 kubenswrapper[19715]: I0313 12:53:30.058205 19715 generic.go:334] "Generic (PLEG): container finished" podID="2a0e239c-fe39-43af-8b0a-2964897d8b92" containerID="eba3cbd82f9bf9c30a938f2bc8b36ef9213a9c14e54e26f316f0f6d0aad9232c" exitCode=0
Mar 13 12:53:30.058331 master-0 kubenswrapper[19715]: I0313 12:53:30.058257 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2a0e239c-fe39-43af-8b0a-2964897d8b92","Type":"ContainerDied","Data":"eba3cbd82f9bf9c30a938f2bc8b36ef9213a9c14e54e26f316f0f6d0aad9232c"}
Mar 13 12:53:30.060996 master-0 kubenswrapper[19715]: I0313 12:53:30.060960 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e95e98146cf857064826636918715dbe/kube-controller-manager-cert-syncer/0.log"
Mar 13 12:53:30.062020 master-0 kubenswrapper[19715]: I0313 12:53:30.061981 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_e95e98146cf857064826636918715dbe/kube-controller-manager/0.log"
Mar 13 12:53:30.062086 master-0 kubenswrapper[19715]: I0313 12:53:30.062031 19715 generic.go:334] "Generic (PLEG): container finished" podID="e95e98146cf857064826636918715dbe" containerID="bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a" exitCode=0
Mar 13 12:53:30.062086 master-0 kubenswrapper[19715]: I0313 12:53:30.062050 19715 generic.go:334] "Generic (PLEG): container finished" podID="e95e98146cf857064826636918715dbe" containerID="5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597" exitCode=0
Mar 13 12:53:30.062086 master-0 kubenswrapper[19715]: I0313 12:53:30.062060 19715 generic.go:334] "Generic (PLEG): container finished" podID="e95e98146cf857064826636918715dbe" containerID="706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc" exitCode=2
Mar 13 12:53:30.062086 master-0 kubenswrapper[19715]: I0313 12:53:30.062068 19715 generic.go:334] "Generic (PLEG): container finished" podID="e95e98146cf857064826636918715dbe" containerID="c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b" exitCode=0
Mar 13 12:53:30.062539 master-0 kubenswrapper[19715]: I0313 12:53:30.062107 19715 scope.go:117] "RemoveContainer" containerID="bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a"
Mar 13 12:53:30.062539 master-0 kubenswrapper[19715]: I0313 12:53:30.062212 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:53:30.080088 master-0 kubenswrapper[19715]: I0313 12:53:30.079980 19715 scope.go:117] "RemoveContainer" containerID="5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597"
Mar 13 12:53:30.085230 master-0 kubenswrapper[19715]: I0313 12:53:30.085176 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="e95e98146cf857064826636918715dbe" podUID="801e0e0ab4a7a1c742dfa21c487f9cca"
Mar 13 12:53:30.103528 master-0 kubenswrapper[19715]: I0313 12:53:30.103458 19715 scope.go:117] "RemoveContainer" containerID="706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc"
Mar 13 12:53:30.118105 master-0 kubenswrapper[19715]: I0313 12:53:30.118054 19715 scope.go:117] "RemoveContainer" containerID="c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b"
Mar 13 12:53:30.134494 master-0 kubenswrapper[19715]: I0313 12:53:30.134455 19715 scope.go:117] "RemoveContainer" containerID="05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1"
Mar 13 12:53:30.148564 master-0 kubenswrapper[19715]: I0313 12:53:30.148509 19715 scope.go:117] "RemoveContainer" containerID="bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a"
Mar 13 12:53:30.150022 master-0 kubenswrapper[19715]: E0313 12:53:30.149878 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a\": container with ID starting with bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a not found: ID does not exist" containerID="bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a"
Mar 13 12:53:30.150022 master-0 kubenswrapper[19715]: I0313 12:53:30.149944 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a"} err="failed to get container status \"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a\": rpc error: code = NotFound desc = could not find container \"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a\": container with ID starting with bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a not found: ID does not exist"
Mar 13 12:53:30.150022 master-0 kubenswrapper[19715]: I0313 12:53:30.149990 19715 scope.go:117] "RemoveContainer" containerID="5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597"
Mar 13 12:53:30.150765 master-0 kubenswrapper[19715]: E0313 12:53:30.150677 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597\": container with ID starting with 5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597 not found: ID does not exist" containerID="5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597"
Mar 13 12:53:30.150851 master-0 kubenswrapper[19715]: I0313 12:53:30.150751 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597"} err="failed to get container status \"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597\": rpc error: code = NotFound desc = could not find container \"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597\": container with ID starting with 5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597 not found: ID does not exist"
Mar 13 12:53:30.150851 master-0 kubenswrapper[19715]: I0313 12:53:30.150793 19715 scope.go:117] "RemoveContainer" containerID="706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc"
Mar 13 12:53:30.151211 master-0 kubenswrapper[19715]: E0313 12:53:30.151138 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc\": container with ID starting with 706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc not found: ID does not exist" containerID="706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc"
Mar 13 12:53:30.151211 master-0 kubenswrapper[19715]: I0313 12:53:30.151183 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc"} err="failed to get container status \"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc\": rpc error: code = NotFound desc = could not find container \"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc\": container with ID starting with 706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc not found: ID does not exist"
Mar 13 12:53:30.151211 master-0 kubenswrapper[19715]: I0313 12:53:30.151205 19715 scope.go:117] "RemoveContainer" containerID="c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b"
Mar 13 12:53:30.152120 master-0 kubenswrapper[19715]: E0313
12:53:30.152060 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b\": container with ID starting with c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b not found: ID does not exist" containerID="c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b" Mar 13 12:53:30.152120 master-0 kubenswrapper[19715]: I0313 12:53:30.152109 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b"} err="failed to get container status \"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b\": rpc error: code = NotFound desc = could not find container \"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b\": container with ID starting with c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b not found: ID does not exist" Mar 13 12:53:30.152244 master-0 kubenswrapper[19715]: I0313 12:53:30.152129 19715 scope.go:117] "RemoveContainer" containerID="05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1" Mar 13 12:53:30.152770 master-0 kubenswrapper[19715]: E0313 12:53:30.152728 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1\": container with ID starting with 05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1 not found: ID does not exist" containerID="05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1" Mar 13 12:53:30.152841 master-0 kubenswrapper[19715]: I0313 12:53:30.152770 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1"} err="failed to get container status 
\"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1\": rpc error: code = NotFound desc = could not find container \"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1\": container with ID starting with 05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1 not found: ID does not exist" Mar 13 12:53:30.152841 master-0 kubenswrapper[19715]: I0313 12:53:30.152804 19715 scope.go:117] "RemoveContainer" containerID="bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a" Mar 13 12:53:30.153256 master-0 kubenswrapper[19715]: I0313 12:53:30.153184 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a"} err="failed to get container status \"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a\": rpc error: code = NotFound desc = could not find container \"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a\": container with ID starting with bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a not found: ID does not exist" Mar 13 12:53:30.153304 master-0 kubenswrapper[19715]: I0313 12:53:30.153256 19715 scope.go:117] "RemoveContainer" containerID="5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597" Mar 13 12:53:30.153882 master-0 kubenswrapper[19715]: I0313 12:53:30.153845 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597"} err="failed to get container status \"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597\": rpc error: code = NotFound desc = could not find container \"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597\": container with ID starting with 5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597 not found: ID does not exist" Mar 13 12:53:30.153882 master-0 kubenswrapper[19715]: I0313 
12:53:30.153877 19715 scope.go:117] "RemoveContainer" containerID="706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc" Mar 13 12:53:30.154436 master-0 kubenswrapper[19715]: I0313 12:53:30.154398 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc"} err="failed to get container status \"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc\": rpc error: code = NotFound desc = could not find container \"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc\": container with ID starting with 706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc not found: ID does not exist" Mar 13 12:53:30.154488 master-0 kubenswrapper[19715]: I0313 12:53:30.154435 19715 scope.go:117] "RemoveContainer" containerID="c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b" Mar 13 12:53:30.155041 master-0 kubenswrapper[19715]: I0313 12:53:30.155012 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b"} err="failed to get container status \"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b\": rpc error: code = NotFound desc = could not find container \"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b\": container with ID starting with c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b not found: ID does not exist" Mar 13 12:53:30.155041 master-0 kubenswrapper[19715]: I0313 12:53:30.155036 19715 scope.go:117] "RemoveContainer" containerID="05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1" Mar 13 12:53:30.155318 master-0 kubenswrapper[19715]: I0313 12:53:30.155275 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1"} err="failed to get 
container status \"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1\": rpc error: code = NotFound desc = could not find container \"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1\": container with ID starting with 05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1 not found: ID does not exist" Mar 13 12:53:30.155363 master-0 kubenswrapper[19715]: I0313 12:53:30.155314 19715 scope.go:117] "RemoveContainer" containerID="bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a" Mar 13 12:53:30.155754 master-0 kubenswrapper[19715]: I0313 12:53:30.155683 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a"} err="failed to get container status \"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a\": rpc error: code = NotFound desc = could not find container \"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a\": container with ID starting with bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a not found: ID does not exist" Mar 13 12:53:30.155754 master-0 kubenswrapper[19715]: I0313 12:53:30.155742 19715 scope.go:117] "RemoveContainer" containerID="5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597" Mar 13 12:53:30.156298 master-0 kubenswrapper[19715]: I0313 12:53:30.156263 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597"} err="failed to get container status \"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597\": rpc error: code = NotFound desc = could not find container \"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597\": container with ID starting with 5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597 not found: ID does not exist" Mar 13 12:53:30.156298 master-0 kubenswrapper[19715]: 
I0313 12:53:30.156288 19715 scope.go:117] "RemoveContainer" containerID="706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc" Mar 13 12:53:30.156739 master-0 kubenswrapper[19715]: I0313 12:53:30.156694 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc"} err="failed to get container status \"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc\": rpc error: code = NotFound desc = could not find container \"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc\": container with ID starting with 706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc not found: ID does not exist" Mar 13 12:53:30.156739 master-0 kubenswrapper[19715]: I0313 12:53:30.156728 19715 scope.go:117] "RemoveContainer" containerID="c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b" Mar 13 12:53:30.157182 master-0 kubenswrapper[19715]: I0313 12:53:30.157149 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b"} err="failed to get container status \"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b\": rpc error: code = NotFound desc = could not find container \"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b\": container with ID starting with c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b not found: ID does not exist" Mar 13 12:53:30.157182 master-0 kubenswrapper[19715]: I0313 12:53:30.157175 19715 scope.go:117] "RemoveContainer" containerID="05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1" Mar 13 12:53:30.157783 master-0 kubenswrapper[19715]: I0313 12:53:30.157654 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1"} err="failed to 
get container status \"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1\": rpc error: code = NotFound desc = could not find container \"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1\": container with ID starting with 05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1 not found: ID does not exist" Mar 13 12:53:30.157783 master-0 kubenswrapper[19715]: I0313 12:53:30.157748 19715 scope.go:117] "RemoveContainer" containerID="bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a" Mar 13 12:53:30.158421 master-0 kubenswrapper[19715]: I0313 12:53:30.158343 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a"} err="failed to get container status \"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a\": rpc error: code = NotFound desc = could not find container \"bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a\": container with ID starting with bcae120562ebd6a080b18b133c1fc3327d4a1ab9452c71849c66e0fdaf7d770a not found: ID does not exist" Mar 13 12:53:30.158421 master-0 kubenswrapper[19715]: I0313 12:53:30.158385 19715 scope.go:117] "RemoveContainer" containerID="5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597" Mar 13 12:53:30.158797 master-0 kubenswrapper[19715]: I0313 12:53:30.158761 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597"} err="failed to get container status \"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597\": rpc error: code = NotFound desc = could not find container \"5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597\": container with ID starting with 5356d0754ac8f08240d34df1037c169f09401a775bea3e1d3f1f746337106597 not found: ID does not exist" Mar 13 12:53:30.158797 master-0 
kubenswrapper[19715]: I0313 12:53:30.158788 19715 scope.go:117] "RemoveContainer" containerID="706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc" Mar 13 12:53:30.159204 master-0 kubenswrapper[19715]: I0313 12:53:30.159169 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc"} err="failed to get container status \"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc\": rpc error: code = NotFound desc = could not find container \"706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc\": container with ID starting with 706b7c6df3fd037109b099fd001db2608c613eb72cef971a2323672a9182a1fc not found: ID does not exist" Mar 13 12:53:30.159204 master-0 kubenswrapper[19715]: I0313 12:53:30.159192 19715 scope.go:117] "RemoveContainer" containerID="c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b" Mar 13 12:53:30.159627 master-0 kubenswrapper[19715]: I0313 12:53:30.159568 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b"} err="failed to get container status \"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b\": rpc error: code = NotFound desc = could not find container \"c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b\": container with ID starting with c3b2820a2fc1b4e22a977335379f46f8937ed89de49ebde52af6071d0984510b not found: ID does not exist" Mar 13 12:53:30.159627 master-0 kubenswrapper[19715]: I0313 12:53:30.159615 19715 scope.go:117] "RemoveContainer" containerID="05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1" Mar 13 12:53:30.159998 master-0 kubenswrapper[19715]: I0313 12:53:30.159961 19715 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1"} err="failed to get container status \"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1\": rpc error: code = NotFound desc = could not find container \"05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1\": container with ID starting with 05cee67fdb12adbad50e5279899fdecb8cc2e756aa06ce3a3964df99001a8fb1 not found: ID does not exist" Mar 13 12:53:30.530472 master-0 kubenswrapper[19715]: I0313 12:53:30.530414 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:53:30.532043 master-0 kubenswrapper[19715]: I0313 12:53:30.530484 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:53:31.421932 master-0 kubenswrapper[19715]: I0313 12:53:31.421876 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:53:31.550261 master-0 kubenswrapper[19715]: I0313 12:53:31.550206 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-var-lock\") pod \"2a0e239c-fe39-43af-8b0a-2964897d8b92\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " Mar 13 12:53:31.551229 master-0 kubenswrapper[19715]: I0313 12:53:31.550427 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-var-lock" (OuterVolumeSpecName: "var-lock") pod "2a0e239c-fe39-43af-8b0a-2964897d8b92" (UID: "2a0e239c-fe39-43af-8b0a-2964897d8b92"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:31.551229 master-0 kubenswrapper[19715]: I0313 12:53:31.551023 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-kubelet-dir\") pod \"2a0e239c-fe39-43af-8b0a-2964897d8b92\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " Mar 13 12:53:31.551229 master-0 kubenswrapper[19715]: I0313 12:53:31.551171 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2a0e239c-fe39-43af-8b0a-2964897d8b92" (UID: "2a0e239c-fe39-43af-8b0a-2964897d8b92"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:53:31.551381 master-0 kubenswrapper[19715]: I0313 12:53:31.551241 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access\") pod \"2a0e239c-fe39-43af-8b0a-2964897d8b92\" (UID: \"2a0e239c-fe39-43af-8b0a-2964897d8b92\") " Mar 13 12:53:31.551848 master-0 kubenswrapper[19715]: I0313 12:53:31.551813 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:31.551848 master-0 kubenswrapper[19715]: I0313 12:53:31.551839 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a0e239c-fe39-43af-8b0a-2964897d8b92-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:31.554367 master-0 kubenswrapper[19715]: I0313 12:53:31.554329 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2a0e239c-fe39-43af-8b0a-2964897d8b92" (UID: "2a0e239c-fe39-43af-8b0a-2964897d8b92"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:53:31.652886 master-0 kubenswrapper[19715]: I0313 12:53:31.652806 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a0e239c-fe39-43af-8b0a-2964897d8b92-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:53:32.084376 master-0 kubenswrapper[19715]: I0313 12:53:32.084206 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-5-master-0" event={"ID":"2a0e239c-fe39-43af-8b0a-2964897d8b92","Type":"ContainerDied","Data":"e8ac18eee8b9aa5dba4108ca45bb042ac3de2b149c55fb5abfeb6a5c326a1b02"} Mar 13 12:53:32.084376 master-0 kubenswrapper[19715]: I0313 12:53:32.084257 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e8ac18eee8b9aa5dba4108ca45bb042ac3de2b149c55fb5abfeb6a5c326a1b02" Mar 13 12:53:32.084376 master-0 kubenswrapper[19715]: I0313 12:53:32.084258 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-5-master-0" Mar 13 12:53:40.530841 master-0 kubenswrapper[19715]: I0313 12:53:40.530696 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:53:40.531546 master-0 kubenswrapper[19715]: I0313 12:53:40.530835 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:53:42.697687 master-0 kubenswrapper[19715]: I0313 12:53:42.697548 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:42.717851 master-0 kubenswrapper[19715]: I0313 12:53:42.716309 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="a8916fa5-daec-4b32-bd44-116fd6e2aad9" Mar 13 12:53:42.717851 master-0 kubenswrapper[19715]: I0313 12:53:42.716373 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="a8916fa5-daec-4b32-bd44-116fd6e2aad9" Mar 13 12:53:42.739333 master-0 kubenswrapper[19715]: I0313 12:53:42.739239 19715 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:42.742276 master-0 kubenswrapper[19715]: I0313 12:53:42.742087 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 12:53:42.749698 master-0 kubenswrapper[19715]: I0313 12:53:42.748736 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 12:53:42.764090 master-0 kubenswrapper[19715]: I0313 12:53:42.761277 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:42.769761 master-0 kubenswrapper[19715]: I0313 12:53:42.769682 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 12:53:42.786566 master-0 kubenswrapper[19715]: W0313 12:53:42.786490 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podaa6a75ab47c06be4e74d05f552da4470.slice/crio-4c012e3a9d11391783f3bbc482a25ba65b8843a8e2a73ea75564e79c75aa7b1a WatchSource:0}: Error finding container 4c012e3a9d11391783f3bbc482a25ba65b8843a8e2a73ea75564e79c75aa7b1a: Status 404 returned error can't find the container with id 4c012e3a9d11391783f3bbc482a25ba65b8843a8e2a73ea75564e79c75aa7b1a Mar 13 12:53:42.890042 master-0 kubenswrapper[19715]: I0313 12:53:42.889948 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 12:53:42.890601 master-0 kubenswrapper[19715]: E0313 12:53:42.890478 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5f67c2e-1d8e-4315-bef7-c8015516cae0" containerName="installer" Mar 13 12:53:42.890924 master-0 kubenswrapper[19715]: I0313 12:53:42.890901 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5f67c2e-1d8e-4315-bef7-c8015516cae0" containerName="installer" Mar 13 12:53:42.890997 master-0 kubenswrapper[19715]: E0313 12:53:42.890947 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a0e239c-fe39-43af-8b0a-2964897d8b92" containerName="installer" Mar 13 12:53:42.890997 master-0 kubenswrapper[19715]: I0313 12:53:42.890957 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a0e239c-fe39-43af-8b0a-2964897d8b92" containerName="installer" Mar 13 12:53:42.891216 master-0 kubenswrapper[19715]: I0313 12:53:42.891187 19715 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="2a0e239c-fe39-43af-8b0a-2964897d8b92" containerName="installer" Mar 13 12:53:42.891300 master-0 kubenswrapper[19715]: I0313 12:53:42.891218 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5f67c2e-1d8e-4315-bef7-c8015516cae0" containerName="installer" Mar 13 12:53:42.891947 master-0 kubenswrapper[19715]: I0313 12:53:42.891917 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:42.894099 master-0 kubenswrapper[19715]: I0313 12:53:42.894032 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 12:53:42.894215 master-0 kubenswrapper[19715]: I0313 12:53:42.894035 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-phtzh" Mar 13 12:53:42.898237 master-0 kubenswrapper[19715]: I0313 12:53:42.898168 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 12:53:42.934832 master-0 kubenswrapper[19715]: I0313 12:53:42.933800 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/139213ac-1249-40eb-853f-768a8c20f6cd-kube-api-access\") pod \"installer-3-master-0\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:42.934832 master-0 kubenswrapper[19715]: I0313 12:53:42.934031 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-var-lock\") pod \"installer-3-master-0\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:42.934832 master-0 kubenswrapper[19715]: I0313 12:53:42.934151 19715 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:43.038770 master-0 kubenswrapper[19715]: I0313 12:53:43.038694 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/139213ac-1249-40eb-853f-768a8c20f6cd-kube-api-access\") pod \"installer-3-master-0\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:43.040739 master-0 kubenswrapper[19715]: I0313 12:53:43.039204 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-var-lock\") pod \"installer-3-master-0\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:43.040739 master-0 kubenswrapper[19715]: I0313 12:53:43.039392 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-var-lock\") pod \"installer-3-master-0\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:43.040739 master-0 kubenswrapper[19715]: I0313 12:53:43.039488 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:43.040739 master-0 kubenswrapper[19715]: I0313 12:53:43.039546 19715 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:43.060281 master-0 kubenswrapper[19715]: I0313 12:53:43.060206 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/139213ac-1249-40eb-853f-768a8c20f6cd-kube-api-access\") pod \"installer-3-master-0\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:43.214022 master-0 kubenswrapper[19715]: I0313 12:53:43.213942 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:53:43.247801 master-0 kubenswrapper[19715]: I0313 12:53:43.247032 19715 generic.go:334] "Generic (PLEG): container finished" podID="aa6a75ab47c06be4e74d05f552da4470" containerID="ce402b44991e35bea610bfb8a762b43c513626e71ef412e855ef900dbff30d31" exitCode=0 Mar 13 12:53:43.247801 master-0 kubenswrapper[19715]: I0313 12:53:43.247105 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerDied","Data":"ce402b44991e35bea610bfb8a762b43c513626e71ef412e855ef900dbff30d31"} Mar 13 12:53:43.247801 master-0 kubenswrapper[19715]: I0313 12:53:43.247138 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"4c012e3a9d11391783f3bbc482a25ba65b8843a8e2a73ea75564e79c75aa7b1a"} Mar 13 12:53:43.638247 master-0 kubenswrapper[19715]: I0313 12:53:43.638176 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 12:53:44.259984 master-0 kubenswrapper[19715]: I0313 12:53:44.259885 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"cc260884a7c16c8eab6abcabda5ab4285c1f5c6c3b338cd1f595f61702ce9e21"} Mar 13 12:53:44.259984 master-0 kubenswrapper[19715]: I0313 12:53:44.259974 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"6468941934fea51ac3644d3ab2509dcf2d21f5c0a1cc5f567f122f8df9926d6b"} Mar 13 12:53:44.259984 master-0 kubenswrapper[19715]: I0313 12:53:44.259985 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"aa6a75ab47c06be4e74d05f552da4470","Type":"ContainerStarted","Data":"5d93b9d2eb27c76872c6374438d5fcf1b5bec3a41ddb9970c0d7c0d826b479b2"} Mar 13 12:53:44.260759 master-0 kubenswrapper[19715]: I0313 12:53:44.260043 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:53:44.261679 master-0 kubenswrapper[19715]: I0313 12:53:44.261646 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"139213ac-1249-40eb-853f-768a8c20f6cd","Type":"ContainerStarted","Data":"4cb40c8f5e350ae5b95cd6a7642b9e9dbc1392cf8857b0ffc73ed969ac802781"} Mar 13 12:53:44.261758 master-0 kubenswrapper[19715]: I0313 12:53:44.261681 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"139213ac-1249-40eb-853f-768a8c20f6cd","Type":"ContainerStarted","Data":"6242c6d4326ab01ab0a1a9b6c0e5fb24a858803e12883f2c03d3eb4ef6c31761"} Mar 13 12:53:44.283829 
master-0 kubenswrapper[19715]: I0313 12:53:44.283666 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.283634775 podStartE2EDuration="2.283634775s" podCreationTimestamp="2026-03-13 12:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:53:44.281194929 +0000 UTC m=+250.847867706" watchObservedRunningTime="2026-03-13 12:53:44.283634775 +0000 UTC m=+250.850307542" Mar 13 12:53:44.302069 master-0 kubenswrapper[19715]: I0313 12:53:44.301978 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.301956019 podStartE2EDuration="2.301956019s" podCreationTimestamp="2026-03-13 12:53:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:53:44.298436449 +0000 UTC m=+250.865109226" watchObservedRunningTime="2026-03-13 12:53:44.301956019 +0000 UTC m=+250.868628776" Mar 13 12:53:44.696770 master-0 kubenswrapper[19715]: I0313 12:53:44.696648 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:44.714407 master-0 kubenswrapper[19715]: I0313 12:53:44.714301 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="aa22799b-01f7-4365-b725-097489495987" Mar 13 12:53:44.714407 master-0 kubenswrapper[19715]: I0313 12:53:44.714382 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="aa22799b-01f7-4365-b725-097489495987" Mar 13 12:53:44.728403 master-0 kubenswrapper[19715]: I0313 12:53:44.728342 19715 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:44.731721 master-0 kubenswrapper[19715]: I0313 12:53:44.731657 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:53:44.736548 master-0 kubenswrapper[19715]: I0313 12:53:44.736481 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:53:44.740965 master-0 kubenswrapper[19715]: I0313 12:53:44.740906 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:44.745843 master-0 kubenswrapper[19715]: I0313 12:53:44.745784 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:53:45.295432 master-0 kubenswrapper[19715]: I0313 12:53:45.294852 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"801e0e0ab4a7a1c742dfa21c487f9cca","Type":"ContainerStarted","Data":"79d2112532eb814b4ddc9e964815bfcf0f82c0b3839cd7c9db7b085901b612ca"} Mar 13 12:53:45.295432 master-0 kubenswrapper[19715]: I0313 12:53:45.294889 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"801e0e0ab4a7a1c742dfa21c487f9cca","Type":"ContainerStarted","Data":"d5454f15f77b6cb561e7eb8617b14ada5ba037b44712b3329b67f9663d58382d"} Mar 13 12:53:46.303680 master-0 kubenswrapper[19715]: I0313 12:53:46.303607 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"801e0e0ab4a7a1c742dfa21c487f9cca","Type":"ContainerStarted","Data":"3676744a93dc4b275eb6a7cc11028760f14bb722b4e049db371fc67c6d22dd94"} Mar 13 12:53:46.303680 master-0 kubenswrapper[19715]: I0313 12:53:46.303675 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"801e0e0ab4a7a1c742dfa21c487f9cca","Type":"ContainerStarted","Data":"9727c8d2dac755dd7ea1b9ad8ff6c17a8b645c1accc27700962725c430cd1484"} Mar 13 12:53:46.303680 master-0 kubenswrapper[19715]: I0313 12:53:46.303689 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"801e0e0ab4a7a1c742dfa21c487f9cca","Type":"ContainerStarted","Data":"6e9a116bda80ce7fe4e93d1c23741a0678a4bf66c268c954fc757c04183b5157"} Mar 13 12:53:46.330675 master-0 kubenswrapper[19715]: I0313 12:53:46.330539 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=2.33051561 podStartE2EDuration="2.33051561s" podCreationTimestamp="2026-03-13 12:53:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:53:46.324016376 +0000 UTC m=+252.890689143" watchObservedRunningTime="2026-03-13 12:53:46.33051561 +0000 UTC m=+252.897188367" Mar 13 12:53:50.530003 master-0 kubenswrapper[19715]: I0313 12:53:50.529912 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:53:50.530744 master-0 kubenswrapper[19715]: I0313 12:53:50.530006 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:53:54.758846 master-0 kubenswrapper[19715]: I0313 12:53:54.758538 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:54.758846 master-0 kubenswrapper[19715]: I0313 12:53:54.758843 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:54.761552 master-0 kubenswrapper[19715]: I0313 12:53:54.759740 19715 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:54.761552 master-0 kubenswrapper[19715]: I0313 12:53:54.759881 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:54.765202 master-0 kubenswrapper[19715]: I0313 12:53:54.765146 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:54.772025 master-0 kubenswrapper[19715]: I0313 12:53:54.771871 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:55.371360 master-0 kubenswrapper[19715]: I0313 12:53:55.371287 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:55.372555 master-0 kubenswrapper[19715]: I0313 12:53:55.372522 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:53:55.718292 master-0 kubenswrapper[19715]: I0313 12:53:55.718199 19715 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:53:55.718626 master-0 kubenswrapper[19715]: I0313 12:53:55.718304 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Mar 13 12:54:00.530629 master-0 kubenswrapper[19715]: I0313 12:54:00.530415 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:54:00.530629 master-0 kubenswrapper[19715]: I0313 12:54:00.530560 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:54:01.336284 master-0 kubenswrapper[19715]: I0313 12:54:01.334845 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7fdf5454d9-tzhsm"] Mar 13 12:54:01.351665 master-0 kubenswrapper[19715]: I0313 12:54:01.345832 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.369610 master-0 kubenswrapper[19715]: I0313 12:54:01.369012 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fdf5454d9-tzhsm"] Mar 13 12:54:01.542641 master-0 kubenswrapper[19715]: I0313 12:54:01.542521 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-console-config\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.542641 master-0 kubenswrapper[19715]: I0313 12:54:01.542638 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-serving-cert\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.543358 master-0 kubenswrapper[19715]: I0313 12:54:01.542664 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-oauth-config\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.543358 master-0 kubenswrapper[19715]: I0313 12:54:01.542686 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-oauth-serving-cert\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.543358 
master-0 kubenswrapper[19715]: I0313 12:54:01.542726 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-js2jf\" (UniqueName: \"kubernetes.io/projected/897ba022-a904-4e2f-9317-d675122727fd-kube-api-access-js2jf\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.543358 master-0 kubenswrapper[19715]: I0313 12:54:01.542785 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-service-ca\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.543358 master-0 kubenswrapper[19715]: I0313 12:54:01.542845 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-trusted-ca-bundle\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.646205 master-0 kubenswrapper[19715]: I0313 12:54:01.644922 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-serving-cert\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.646205 master-0 kubenswrapper[19715]: I0313 12:54:01.645881 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-oauth-config\") pod \"console-7fdf5454d9-tzhsm\" (UID: 
\"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.646205 master-0 kubenswrapper[19715]: I0313 12:54:01.645907 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-oauth-serving-cert\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.646205 master-0 kubenswrapper[19715]: I0313 12:54:01.645944 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-js2jf\" (UniqueName: \"kubernetes.io/projected/897ba022-a904-4e2f-9317-d675122727fd-kube-api-access-js2jf\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.646205 master-0 kubenswrapper[19715]: I0313 12:54:01.646027 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-service-ca\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.646205 master-0 kubenswrapper[19715]: I0313 12:54:01.646127 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-trusted-ca-bundle\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.647258 master-0 kubenswrapper[19715]: I0313 12:54:01.647207 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-oauth-serving-cert\") pod 
\"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.647811 master-0 kubenswrapper[19715]: I0313 12:54:01.647546 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-service-ca\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.648875 master-0 kubenswrapper[19715]: I0313 12:54:01.648839 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-trusted-ca-bundle\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.648991 master-0 kubenswrapper[19715]: I0313 12:54:01.648974 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-console-config\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.652620 master-0 kubenswrapper[19715]: I0313 12:54:01.650099 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-console-config\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.652620 master-0 kubenswrapper[19715]: I0313 12:54:01.651691 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-serving-cert\") pod 
\"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.663836 master-0 kubenswrapper[19715]: I0313 12:54:01.662477 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-oauth-config\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.670766 master-0 kubenswrapper[19715]: I0313 12:54:01.670696 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-js2jf\" (UniqueName: \"kubernetes.io/projected/897ba022-a904-4e2f-9317-d675122727fd-kube-api-access-js2jf\") pod \"console-7fdf5454d9-tzhsm\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:01.773438 master-0 kubenswrapper[19715]: I0313 12:54:01.773368 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:02.290448 master-0 kubenswrapper[19715]: I0313 12:54:02.290376 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7fdf5454d9-tzhsm"] Mar 13 12:54:02.298042 master-0 kubenswrapper[19715]: W0313 12:54:02.297966 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod897ba022_a904_4e2f_9317_d675122727fd.slice/crio-b6f6002eee7cb91a4bce1868d537e329216682ee1398b8b512268f5d8f40b09a WatchSource:0}: Error finding container b6f6002eee7cb91a4bce1868d537e329216682ee1398b8b512268f5d8f40b09a: Status 404 returned error can't find the container with id b6f6002eee7cb91a4bce1868d537e329216682ee1398b8b512268f5d8f40b09a Mar 13 12:54:02.463113 master-0 kubenswrapper[19715]: I0313 12:54:02.463040 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fdf5454d9-tzhsm" event={"ID":"897ba022-a904-4e2f-9317-d675122727fd","Type":"ContainerStarted","Data":"b6f6002eee7cb91a4bce1868d537e329216682ee1398b8b512268f5d8f40b09a"} Mar 13 12:54:03.473190 master-0 kubenswrapper[19715]: I0313 12:54:03.473086 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fdf5454d9-tzhsm" event={"ID":"897ba022-a904-4e2f-9317-d675122727fd","Type":"ContainerStarted","Data":"3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af"} Mar 13 12:54:03.497651 master-0 kubenswrapper[19715]: I0313 12:54:03.497505 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7fdf5454d9-tzhsm" podStartSLOduration=2.497481188 podStartE2EDuration="2.497481188s" podCreationTimestamp="2026-03-13 12:54:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:54:03.493176133 +0000 UTC m=+270.059848920" 
watchObservedRunningTime="2026-03-13 12:54:03.497481188 +0000 UTC m=+270.064153955" Mar 13 12:54:09.848150 master-0 kubenswrapper[19715]: I0313 12:54:09.847931 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:54:09.848947 master-0 kubenswrapper[19715]: I0313 12:54:09.848804 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="thanos-sidecar" containerID="cri-o://9463824c4754254b1aa46bbaffa14766bfc6621f106bd61ec45a341f649f56be" gracePeriod=600 Mar 13 12:54:09.848947 master-0 kubenswrapper[19715]: I0313 12:54:09.848809 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy-thanos" containerID="cri-o://ce6cbac40e6ff87a6089e3a83a73ca630aed4d9a458d7e6426b09c49aa6ea84d" gracePeriod=600 Mar 13 12:54:09.849097 master-0 kubenswrapper[19715]: I0313 12:54:09.848942 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy-web" containerID="cri-o://394476f4de2eb5477b8f78919726c06de4a0720a783079391349dd5d72e44469" gracePeriod=600 Mar 13 12:54:09.849097 master-0 kubenswrapper[19715]: I0313 12:54:09.848794 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy" containerID="cri-o://5130b2fe955358251a5f9c45b6699a17c0118abcb7d3a20a19463e82bc49a603" gracePeriod=600 Mar 13 12:54:09.849097 master-0 kubenswrapper[19715]: I0313 12:54:09.849066 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" 
podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="config-reloader" containerID="cri-o://b5d41bd31df12d6bde4b88fd262f4bd668a988bb2fa111efac5ce9627109d651" gracePeriod=600 Mar 13 12:54:09.849097 master-0 kubenswrapper[19715]: I0313 12:54:09.849087 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="prometheus" containerID="cri-o://6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7" gracePeriod=600 Mar 13 12:54:10.530706 master-0 kubenswrapper[19715]: I0313 12:54:10.530520 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:54:10.530706 master-0 kubenswrapper[19715]: I0313 12:54:10.530666 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:54:10.588111 master-0 kubenswrapper[19715]: I0313 12:54:10.588038 19715 generic.go:334] "Generic (PLEG): container finished" podID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerID="ce6cbac40e6ff87a6089e3a83a73ca630aed4d9a458d7e6426b09c49aa6ea84d" exitCode=0 Mar 13 12:54:10.588111 master-0 kubenswrapper[19715]: I0313 12:54:10.588083 19715 generic.go:334] "Generic (PLEG): container finished" podID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerID="5130b2fe955358251a5f9c45b6699a17c0118abcb7d3a20a19463e82bc49a603" exitCode=0 Mar 13 12:54:10.588111 master-0 kubenswrapper[19715]: I0313 12:54:10.588096 19715 generic.go:334] "Generic (PLEG): container finished" podID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" 
containerID="394476f4de2eb5477b8f78919726c06de4a0720a783079391349dd5d72e44469" exitCode=0 Mar 13 12:54:10.588111 master-0 kubenswrapper[19715]: I0313 12:54:10.588105 19715 generic.go:334] "Generic (PLEG): container finished" podID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerID="9463824c4754254b1aa46bbaffa14766bfc6621f106bd61ec45a341f649f56be" exitCode=0 Mar 13 12:54:10.588111 master-0 kubenswrapper[19715]: I0313 12:54:10.588115 19715 generic.go:334] "Generic (PLEG): container finished" podID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerID="b5d41bd31df12d6bde4b88fd262f4bd668a988bb2fa111efac5ce9627109d651" exitCode=0 Mar 13 12:54:10.588111 master-0 kubenswrapper[19715]: I0313 12:54:10.588122 19715 generic.go:334] "Generic (PLEG): container finished" podID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerID="6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7" exitCode=0 Mar 13 12:54:10.588595 master-0 kubenswrapper[19715]: I0313 12:54:10.588145 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerDied","Data":"ce6cbac40e6ff87a6089e3a83a73ca630aed4d9a458d7e6426b09c49aa6ea84d"} Mar 13 12:54:10.588595 master-0 kubenswrapper[19715]: I0313 12:54:10.588182 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerDied","Data":"5130b2fe955358251a5f9c45b6699a17c0118abcb7d3a20a19463e82bc49a603"} Mar 13 12:54:10.588595 master-0 kubenswrapper[19715]: I0313 12:54:10.588192 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerDied","Data":"394476f4de2eb5477b8f78919726c06de4a0720a783079391349dd5d72e44469"} Mar 13 12:54:10.588595 master-0 kubenswrapper[19715]: I0313 12:54:10.588203 19715 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerDied","Data":"9463824c4754254b1aa46bbaffa14766bfc6621f106bd61ec45a341f649f56be"} Mar 13 12:54:10.588595 master-0 kubenswrapper[19715]: I0313 12:54:10.588213 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerDied","Data":"b5d41bd31df12d6bde4b88fd262f4bd668a988bb2fa111efac5ce9627109d651"} Mar 13 12:54:10.588595 master-0 kubenswrapper[19715]: I0313 12:54:10.588221 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerDied","Data":"6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7"} Mar 13 12:54:10.992083 master-0 kubenswrapper[19715]: E0313 12:54:10.992005 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7 is running failed: container process not found" containerID="6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Mar 13 12:54:10.992930 master-0 kubenswrapper[19715]: E0313 12:54:10.992788 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7 is running failed: container process not found" containerID="6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail 
http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Mar 13 12:54:10.993344 master-0 kubenswrapper[19715]: E0313 12:54:10.993239 19715 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7 is running failed: container process not found" containerID="6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7" cmd=["sh","-c","if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi"] Mar 13 12:54:10.993344 master-0 kubenswrapper[19715]: E0313 12:54:10.993276 19715 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7 is running failed: container process not found" probeType="Readiness" pod="openshift-monitoring/prometheus-k8s-0" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="prometheus" Mar 13 12:54:11.144618 master-0 kubenswrapper[19715]: I0313 12:54:11.137869 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:54:11.234393 master-0 kubenswrapper[19715]: I0313 12:54:11.234323 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.234703 master-0 kubenswrapper[19715]: I0313 12:54:11.234431 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-web-config\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.234703 master-0 kubenswrapper[19715]: I0313 12:54:11.234464 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-db\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.234703 master-0 kubenswrapper[19715]: I0313 12:54:11.234521 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.234703 master-0 kubenswrapper[19715]: I0313 12:54:11.234552 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config-out\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 
13 12:54:11.234703 master-0 kubenswrapper[19715]: I0313 12:54:11.234594 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-metrics-client-ca\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.234703 master-0 kubenswrapper[19715]: I0313 12:54:11.234627 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.234703 master-0 kubenswrapper[19715]: I0313 12:54:11.234654 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-rulefiles-0\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.234703 master-0 kubenswrapper[19715]: I0313 12:54:11.234687 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-metrics-client-certs\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.235096 master-0 kubenswrapper[19715]: I0313 12:54:11.234733 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-tls\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.235096 master-0 kubenswrapper[19715]: I0313 12:54:11.234768 19715 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-trusted-ca-bundle\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.235096 master-0 kubenswrapper[19715]: I0313 12:54:11.234792 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-grpc-tls\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.235096 master-0 kubenswrapper[19715]: I0313 12:54:11.234839 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-kubelet-serving-ca-bundle\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.235096 master-0 kubenswrapper[19715]: I0313 12:54:11.234886 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-kube-rbac-proxy\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.235096 master-0 kubenswrapper[19715]: I0313 12:54:11.234917 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-thanos-prometheus-http-client-file\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.235096 master-0 kubenswrapper[19715]: I0313 12:54:11.234972 19715 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-tls-assets\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.235885 master-0 kubenswrapper[19715]: I0313 12:54:11.235840 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:54:11.236015 master-0 kubenswrapper[19715]: I0313 12:54:11.235973 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5m4l\" (UniqueName: \"kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-kube-api-access-j5m4l\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.236075 master-0 kubenswrapper[19715]: I0313 12:54:11.236037 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-serving-certs-ca-bundle\") pod \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\" (UID: \"50f9cfe2-048d-42c1-bd6c-30ab66b713d1\") " Mar 13 12:54:11.236484 master-0 kubenswrapper[19715]: I0313 12:54:11.236461 19715 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.237036 master-0 kubenswrapper[19715]: I0313 12:54:11.237006 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:54:11.238512 master-0 kubenswrapper[19715]: I0313 12:54:11.238485 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:54:11.239440 master-0 kubenswrapper[19715]: I0313 12:54:11.239400 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:54:11.241706 master-0 kubenswrapper[19715]: I0313 12:54:11.240521 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:54:11.241706 master-0 kubenswrapper[19715]: I0313 12:54:11.241585 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:54:11.241957 master-0 kubenswrapper[19715]: I0313 12:54:11.241878 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:54:11.242240 master-0 kubenswrapper[19715]: I0313 12:54:11.242199 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config" (OuterVolumeSpecName: "config") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:54:11.247811 master-0 kubenswrapper[19715]: I0313 12:54:11.247137 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "secret-prometheus-k8s-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:54:11.247811 master-0 kubenswrapper[19715]: I0313 12:54:11.247327 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config-out" (OuterVolumeSpecName: "config-out") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 12:54:11.247811 master-0 kubenswrapper[19715]: I0313 12:54:11.247362 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:54:11.247811 master-0 kubenswrapper[19715]: I0313 12:54:11.247524 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:54:11.249546 master-0 kubenswrapper[19715]: I0313 12:54:11.249493 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "thanos-prometheus-http-client-file". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:54:11.253254 master-0 kubenswrapper[19715]: I0313 12:54:11.253205 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:54:11.253399 master-0 kubenswrapper[19715]: I0313 12:54:11.253335 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:54:11.254036 master-0 kubenswrapper[19715]: I0313 12:54:11.253981 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:54:11.261072 master-0 kubenswrapper[19715]: I0313 12:54:11.260851 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-kube-api-access-j5m4l" (OuterVolumeSpecName: "kube-api-access-j5m4l") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "kube-api-access-j5m4l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:54:11.305993 master-0 kubenswrapper[19715]: I0313 12:54:11.305700 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-web-config" (OuterVolumeSpecName: "web-config") pod "50f9cfe2-048d-42c1-bd6c-30ab66b713d1" (UID: "50f9cfe2-048d-42c1-bd6c-30ab66b713d1"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337653 19715 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337717 19715 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337739 19715 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-tls-assets\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337754 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5m4l\" (UniqueName: \"kubernetes.io/projected/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-kube-api-access-j5m4l\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337782 19715 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-serving-certs-ca-bundle\") on node 
\"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337799 19715 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337813 19715 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-web-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337829 19715 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337864 19715 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337882 19715 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config-out\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337901 19715 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337914 19715 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337927 19715 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337939 19715 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337954 19715 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-secret-grpc-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337965 19715 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.340471 master-0 kubenswrapper[19715]: I0313 12:54:11.337978 19715 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/50f9cfe2-048d-42c1-bd6c-30ab66b713d1-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:11.600482 master-0 kubenswrapper[19715]: I0313 12:54:11.600310 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"50f9cfe2-048d-42c1-bd6c-30ab66b713d1","Type":"ContainerDied","Data":"a6d4c32d41120afcac0120743281b178782a2ee49a010b4d81fdaac2526d5db1"} Mar 13 12:54:11.600482 master-0 kubenswrapper[19715]: I0313 12:54:11.600412 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:54:11.600482 master-0 kubenswrapper[19715]: I0313 12:54:11.600450 19715 scope.go:117] "RemoveContainer" containerID="ce6cbac40e6ff87a6089e3a83a73ca630aed4d9a458d7e6426b09c49aa6ea84d" Mar 13 12:54:11.621842 master-0 kubenswrapper[19715]: I0313 12:54:11.621792 19715 scope.go:117] "RemoveContainer" containerID="5130b2fe955358251a5f9c45b6699a17c0118abcb7d3a20a19463e82bc49a603" Mar 13 12:54:11.644628 master-0 kubenswrapper[19715]: I0313 12:54:11.644556 19715 scope.go:117] "RemoveContainer" containerID="394476f4de2eb5477b8f78919726c06de4a0720a783079391349dd5d72e44469" Mar 13 12:54:11.646136 master-0 kubenswrapper[19715]: I0313 12:54:11.646092 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:54:11.664685 master-0 kubenswrapper[19715]: I0313 12:54:11.664628 19715 scope.go:117] "RemoveContainer" containerID="9463824c4754254b1aa46bbaffa14766bfc6621f106bd61ec45a341f649f56be" Mar 13 12:54:11.684458 master-0 kubenswrapper[19715]: I0313 12:54:11.684427 19715 scope.go:117] "RemoveContainer" containerID="b5d41bd31df12d6bde4b88fd262f4bd668a988bb2fa111efac5ce9627109d651" Mar 13 12:54:11.725697 master-0 kubenswrapper[19715]: I0313 12:54:11.724772 19715 scope.go:117] "RemoveContainer" containerID="6ff41cac12178add1274d61c9909e4c536072763a364171a09e21a829bd3b5b7" Mar 13 12:54:11.759782 master-0 kubenswrapper[19715]: I0313 12:54:11.759379 19715 scope.go:117] "RemoveContainer" containerID="8673458808c732fe7f95b1d4faa7c40036a20d9b20d978c9e0314fa25bae2c05" Mar 13 12:54:11.776059 master-0 kubenswrapper[19715]: I0313 12:54:11.775800 19715 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:11.776059 master-0 kubenswrapper[19715]: I0313 12:54:11.775914 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:54:11.777705 master-0 kubenswrapper[19715]: I0313 12:54:11.777115 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:54:11.777705 master-0 kubenswrapper[19715]: I0313 12:54:11.777162 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:54:12.321952 master-0 kubenswrapper[19715]: I0313 12:54:12.321803 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.360919 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: E0313 12:54:12.362786 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy-thanos" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.362924 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy-thanos" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: E0313 12:54:12.363007 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" 
containerName="init-config-reloader" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.363021 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="init-config-reloader" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: E0313 12:54:12.363041 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy-web" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.363895 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy-web" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: E0313 12:54:12.363910 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="prometheus" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.363918 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="prometheus" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: E0313 12:54:12.363964 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="config-reloader" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.363973 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="config-reloader" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: E0313 12:54:12.363989 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.363997 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: 
E0313 12:54:12.364012 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="thanos-sidecar" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.364053 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="thanos-sidecar" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.364323 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy-web" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.364378 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="thanos-sidecar" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.364400 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="prometheus" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.364411 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.364459 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="kube-rbac-proxy-thanos" Mar 13 12:54:12.368092 master-0 kubenswrapper[19715]: I0313 12:54:12.364469 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" containerName="config-reloader" Mar 13 12:54:12.373760 master-0 kubenswrapper[19715]: I0313 12:54:12.371551 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 12:54:12.383656 master-0 kubenswrapper[19715]: I0313 12:54:12.382926 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 13 12:54:12.383656 master-0 kubenswrapper[19715]: I0313 12:54:12.383233 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 13 12:54:12.383656 master-0 kubenswrapper[19715]: I0313 12:54:12.383394 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-zzprz" Mar 13 12:54:12.383656 master-0 kubenswrapper[19715]: I0313 12:54:12.383507 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 13 12:54:12.384173 master-0 kubenswrapper[19715]: I0313 12:54:12.383701 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-84k2dnesbumig" Mar 13 12:54:12.384173 master-0 kubenswrapper[19715]: I0313 12:54:12.383967 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 13 12:54:12.384276 master-0 kubenswrapper[19715]: I0313 12:54:12.384230 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 13 12:54:12.384391 master-0 kubenswrapper[19715]: I0313 12:54:12.384365 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 13 12:54:12.384512 master-0 kubenswrapper[19715]: I0313 12:54:12.384488 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 13 12:54:12.384670 master-0 kubenswrapper[19715]: I0313 12:54:12.384648 19715 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 13 12:54:12.384788 master-0 kubenswrapper[19715]: I0313 12:54:12.384764 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 13 12:54:12.387062 master-0 kubenswrapper[19715]: I0313 12:54:12.387014 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 13 12:54:12.401611 master-0 kubenswrapper[19715]: I0313 12:54:12.399844 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Mar 13 12:54:12.417390 master-0 kubenswrapper[19715]: I0313 12:54:12.412930 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 13 12:54:12.452237 master-0 kubenswrapper[19715]: I0313 12:54:12.452166 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452237 master-0 kubenswrapper[19715]: I0313 12:54:12.452223 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s5g4b\" (UniqueName: \"kubernetes.io/projected/65129feb-d231-4e3f-84a0-e769ea0b0eef-kube-api-access-s5g4b\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452237 master-0 kubenswrapper[19715]: I0313 12:54:12.452248 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-web-config\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452264 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452285 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-config\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452309 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452325 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/65129feb-d231-4e3f-84a0-e769ea0b0eef-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452383 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452417 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452431 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/65129feb-d231-4e3f-84a0-e769ea0b0eef-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452451 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452475 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452497 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452520 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452552 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/65129feb-d231-4e3f-84a0-e769ea0b0eef-config-out\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452593 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452629 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.452787 master-0 kubenswrapper[19715]: I0313 12:54:12.452647 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.555876 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.555958 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/65129feb-d231-4e3f-84a0-e769ea0b0eef-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556002 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556051 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556100 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556135 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556192 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/65129feb-d231-4e3f-84a0-e769ea0b0eef-config-out\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556221 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556267 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556314 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556373 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556404 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s5g4b\" (UniqueName: \"kubernetes.io/projected/65129feb-d231-4e3f-84a0-e769ea0b0eef-kube-api-access-s5g4b\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556433 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-web-config\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556461 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556496 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-config\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556535 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/65129feb-d231-4e3f-84a0-e769ea0b0eef-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556566 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.556660 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.559149 master-0 kubenswrapper[19715]: I0313 12:54:12.558658 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.561556 master-0 kubenswrapper[19715]: I0313 12:54:12.561462 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.566279 master-0 kubenswrapper[19715]: I0313 12:54:12.565619 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.566279 master-0 kubenswrapper[19715]: I0313 12:54:12.565727 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.566871 master-0 kubenswrapper[19715]: I0313 12:54:12.566721 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.572837 master-0 kubenswrapper[19715]: I0313 12:54:12.570804 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.572837 master-0 kubenswrapper[19715]: I0313 12:54:12.571192 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/65129feb-d231-4e3f-84a0-e769ea0b0eef-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.572837 master-0 kubenswrapper[19715]: I0313 12:54:12.571382 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/65129feb-d231-4e3f-84a0-e769ea0b0eef-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.572837 master-0 kubenswrapper[19715]: I0313 12:54:12.571611 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-web-config\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.572837 master-0 kubenswrapper[19715]: I0313 12:54:12.571828 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.572837 master-0 kubenswrapper[19715]: I0313 12:54:12.572694 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.573508 master-0 kubenswrapper[19715]: I0313 12:54:12.573381 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.577404 master-0 kubenswrapper[19715]: I0313 12:54:12.576803 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/65129feb-d231-4e3f-84a0-e769ea0b0eef-config-out\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.583067 master-0 kubenswrapper[19715]: I0313 12:54:12.582954 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.584497 master-0 kubenswrapper[19715]: I0313 12:54:12.584449 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.586332 master-0 kubenswrapper[19715]: I0313 12:54:12.585551 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/65129feb-d231-4e3f-84a0-e769ea0b0eef-config\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.587463 master-0 kubenswrapper[19715]: I0313 12:54:12.587237 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/65129feb-d231-4e3f-84a0-e769ea0b0eef-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.592861 master-0 kubenswrapper[19715]: I0313 12:54:12.592208 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s5g4b\" (UniqueName: \"kubernetes.io/projected/65129feb-d231-4e3f-84a0-e769ea0b0eef-kube-api-access-s5g4b\") pod \"prometheus-k8s-0\" (UID: \"65129feb-d231-4e3f-84a0-e769ea0b0eef\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:12.728298 master-0 kubenswrapper[19715]: I0313 12:54:12.728218 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:13.233554 master-0 kubenswrapper[19715]: I0313 12:54:13.233324 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 13 12:54:13.624256 master-0 kubenswrapper[19715]: I0313 12:54:13.624096 19715 generic.go:334] "Generic (PLEG): container finished" podID="65129feb-d231-4e3f-84a0-e769ea0b0eef" containerID="c2e05aea615fd6c743d1c5b4d82f219e3da008b2bd077b261c3f9cf16badc2b7" exitCode=0
Mar 13 12:54:13.624256 master-0 kubenswrapper[19715]: I0313 12:54:13.624160 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"65129feb-d231-4e3f-84a0-e769ea0b0eef","Type":"ContainerDied","Data":"c2e05aea615fd6c743d1c5b4d82f219e3da008b2bd077b261c3f9cf16badc2b7"}
Mar 13 12:54:13.624256 master-0 kubenswrapper[19715]: I0313 12:54:13.624198 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"65129feb-d231-4e3f-84a0-e769ea0b0eef","Type":"ContainerStarted","Data":"7914c1feeb5fb22c012bb2237a467b870fb932a6ce2f38b493637fb85c8a898b"}
Mar 13 12:54:13.705966 master-0 kubenswrapper[19715]: I0313 12:54:13.705910 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50f9cfe2-048d-42c1-bd6c-30ab66b713d1" path="/var/lib/kubelet/pods/50f9cfe2-048d-42c1-bd6c-30ab66b713d1/volumes"
Mar 13 12:54:14.634328 master-0 kubenswrapper[19715]: I0313 12:54:14.634265 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"65129feb-d231-4e3f-84a0-e769ea0b0eef","Type":"ContainerStarted","Data":"ce1de8ea289f0cc1e4be2f53c5ce95c4d257ebd1153f6d5cffcd71d695ef6958"}
Mar 13 12:54:14.635004 master-0 kubenswrapper[19715]: I0313 12:54:14.634983 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"65129feb-d231-4e3f-84a0-e769ea0b0eef","Type":"ContainerStarted","Data":"bee438d7ef1b9cb47343a1df81824524cae649b2d470db32f104a40eefa0d0ae"}
Mar 13 12:54:14.635133 master-0 kubenswrapper[19715]: I0313 12:54:14.635111 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"65129feb-d231-4e3f-84a0-e769ea0b0eef","Type":"ContainerStarted","Data":"55be2ea23fa6afe8eb17503ef5db4be900cc688ac163847e80c3dc28fc148572"}
Mar 13 12:54:14.635262 master-0 kubenswrapper[19715]: I0313 12:54:14.635240 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"65129feb-d231-4e3f-84a0-e769ea0b0eef","Type":"ContainerStarted","Data":"b5ad14ed6d586f6d6859f0830b11c3f7f241a1724d6b233e0a1d2857eecc9e9e"}
Mar 13 12:54:14.635382 master-0 kubenswrapper[19715]: I0313 12:54:14.635363 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"65129feb-d231-4e3f-84a0-e769ea0b0eef","Type":"ContainerStarted","Data":"484d8b882d67ae8a15fdb3435c2c3d6348775fd9522541d810eb419dcf5c5054"}
Mar 13 12:54:14.635464 master-0 kubenswrapper[19715]: I0313 12:54:14.635451 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"65129feb-d231-4e3f-84a0-e769ea0b0eef","Type":"ContainerStarted","Data":"ae3caba65b45b50c137e9187368582491c12a9516eac5110ac732b0f491a335c"}
Mar 13 12:54:14.672248 master-0 kubenswrapper[19715]: I0313 12:54:14.672145 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=2.672105266 podStartE2EDuration="2.672105266s" podCreationTimestamp="2026-03-13 12:54:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:54:14.667689133 +0000 UTC m=+281.234361900" watchObservedRunningTime="2026-03-13 12:54:14.672105266 +0000 UTC m=+281.238778033"
Mar 13 12:54:17.728653 master-0 kubenswrapper[19715]: I0313 12:54:17.728552 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:54:20.529909 master-0 kubenswrapper[19715]: I0313 12:54:20.529821 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:54:20.530737 master-0 kubenswrapper[19715]: I0313 12:54:20.529908 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:54:21.775559 master-0 kubenswrapper[19715]: I0313 12:54:21.775480 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 12:54:21.776143 master-0 kubenswrapper[19715]: I0313 12:54:21.775563 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 12:54:21.785996 master-0 kubenswrapper[19715]: I0313 12:54:21.785905 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:54:21.786973 master-0 kubenswrapper[19715]: I0313 12:54:21.786929 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 13 12:54:21.787076 master-0 kubenswrapper[19715]: I0313 12:54:21.787039 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:21.788352 master-0 kubenswrapper[19715]: I0313 12:54:21.788314 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 13 12:54:21.788779 master-0 kubenswrapper[19715]: E0313 12:54:21.788541 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-check-endpoints"
Mar 13 12:54:21.788779 master-0 kubenswrapper[19715]: I0313 12:54:21.788777 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-check-endpoints"
Mar 13 12:54:21.788896 master-0 kubenswrapper[19715]: E0313 12:54:21.788808 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="setup"
Mar 13 12:54:21.788896 master-0 kubenswrapper[19715]: I0313 12:54:21.788817 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="setup"
Mar 13 12:54:21.788896 master-0 kubenswrapper[19715]: E0313 12:54:21.788828 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-syncer"
Mar 13 12:54:21.788896 master-0 kubenswrapper[19715]: I0313 12:54:21.788835 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-syncer"
Mar 13 12:54:21.788896 master-0 kubenswrapper[19715]: E0313 12:54:21.788852 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:54:21.788896 master-0 kubenswrapper[19715]: I0313 12:54:21.788858 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:54:21.788896 master-0 kubenswrapper[19715]: E0313 12:54:21.788869 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-regeneration-controller"
Mar 13 12:54:21.788896 master-0 kubenswrapper[19715]: I0313 12:54:21.788875 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-regeneration-controller"
Mar 13 12:54:21.788896 master-0 kubenswrapper[19715]: E0313 12:54:21.788889 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver"
Mar 13 12:54:21.788896 master-0 kubenswrapper[19715]: I0313 12:54:21.788895 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver"
Mar 13 12:54:21.789283 master-0 kubenswrapper[19715]: I0313 12:54:21.789044 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-syncer"
Mar 13 12:54:21.789283 master-0 kubenswrapper[19715]: I0313 12:54:21.789057 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver"
Mar 13 12:54:21.789283 master-0 kubenswrapper[19715]: I0313 12:54:21.789068 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-insecure-readyz"
Mar 13 12:54:21.789283 master-0 kubenswrapper[19715]: I0313 12:54:21.789082 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-check-endpoints"
Mar 13 12:54:21.789283 master-0 kubenswrapper[19715]: I0313 12:54:21.789092 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-regeneration-controller"
Mar 13 12:54:21.832090 master-0 kubenswrapper[19715]: I0313 12:54:21.832012 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:54:21.906264 master-0 kubenswrapper[19715]: I0313 12:54:21.906187 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:21.906536 master-0 kubenswrapper[19715]: I0313 12:54:21.906498 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:21.906603 master-0 kubenswrapper[19715]: I0313 12:54:21.906563 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:21.906677 master-0 kubenswrapper[19715]: I0313 12:54:21.906656 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:21.906728 master-0 kubenswrapper[19715]: I0313 12:54:21.906703 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:21.906774 master-0 kubenswrapper[19715]: I0313 12:54:21.906748 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:21.906835 master-0 kubenswrapper[19715]: I0313 12:54:21.906810 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:21.906967 master-0 kubenswrapper[19715]: I0313 12:54:21.906932 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:22.008282 master-0 kubenswrapper[19715]: I0313 12:54:22.008205 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:22.008282 master-0 kubenswrapper[19715]: I0313 12:54:22.008278 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:22.008639 master-0 kubenswrapper[19715]: I0313 12:54:22.008324 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:22.008639 master-0 kubenswrapper[19715]: I0313 12:54:22.008364 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:22.008639 master-0 kubenswrapper[19715]: I0313 12:54:22.008440 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:22.008639 master-0 kubenswrapper[19715]: I0313 12:54:22.008466 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:22.008639 master-0 kubenswrapper[19715]: I0313 12:54:22.008501 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:22.008639 master-0 kubenswrapper[19715]: I0313 12:54:22.008537 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:22.008947 master-0 kubenswrapper[19715]: I0313 12:54:22.008685 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:54:22.008947 master-0 kubenswrapper[19715]: I0313 12:54:22.008730 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:54:22.008947 master-0 kubenswrapper[19715]: I0313 12:54:22.008761 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:54:22.008947 master-0 kubenswrapper[19715]: I0313 12:54:22.008783 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:54:22.008947 master-0 kubenswrapper[19715]: I0313 12:54:22.008806 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:54:22.008947 master-0 kubenswrapper[19715]: I0313 12:54:22.008830 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:54:22.008947 master-0 kubenswrapper[19715]: I0313 12:54:22.008856 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " 
pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:54:22.008947 master-0 kubenswrapper[19715]: I0313 12:54:22.008890 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:54:22.130345 master-0 kubenswrapper[19715]: I0313 12:54:22.130204 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:54:22.152607 master-0 kubenswrapper[19715]: W0313 12:54:22.152545 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod899242a15b2bdf3b4a04fb323647ca94.slice/crio-f6c979b2707165572c5bd15fa21a7def22edc32f72844072439e1cfe60c07a4e WatchSource:0}: Error finding container f6c979b2707165572c5bd15fa21a7def22edc32f72844072439e1cfe60c07a4e: Status 404 returned error can't find the container with id f6c979b2707165572c5bd15fa21a7def22edc32f72844072439e1cfe60c07a4e Mar 13 12:54:22.697231 master-0 kubenswrapper[19715]: I0313 12:54:22.697076 19715 generic.go:334] "Generic (PLEG): container finished" podID="139213ac-1249-40eb-853f-768a8c20f6cd" containerID="4cb40c8f5e350ae5b95cd6a7642b9e9dbc1392cf8857b0ffc73ed969ac802781" exitCode=0 Mar 13 12:54:22.697231 master-0 kubenswrapper[19715]: I0313 12:54:22.697146 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"139213ac-1249-40eb-853f-768a8c20f6cd","Type":"ContainerDied","Data":"4cb40c8f5e350ae5b95cd6a7642b9e9dbc1392cf8857b0ffc73ed969ac802781"} Mar 13 12:54:22.699135 master-0 kubenswrapper[19715]: I0313 12:54:22.699088 19715 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver" containerID="cri-o://ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d" gracePeriod=15 Mar 13 12:54:22.700095 master-0 kubenswrapper[19715]: I0313 12:54:22.700054 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2"} Mar 13 12:54:22.700095 master-0 kubenswrapper[19715]: I0313 12:54:22.700087 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"f6c979b2707165572c5bd15fa21a7def22edc32f72844072439e1cfe60c07a4e"} Mar 13 12:54:22.700244 master-0 kubenswrapper[19715]: I0313 12:54:22.700153 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-check-endpoints" containerID="cri-o://804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7" gracePeriod=15 Mar 13 12:54:22.700244 master-0 kubenswrapper[19715]: I0313 12:54:22.700223 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8" gracePeriod=15 Mar 13 12:54:22.700375 master-0 kubenswrapper[19715]: I0313 12:54:22.700286 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" 
containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b" gracePeriod=15 Mar 13 12:54:22.700375 master-0 kubenswrapper[19715]: I0313 12:54:22.700340 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="4c3280e9367536f782caf8bdc07edb85" containerName="kube-apiserver-cert-syncer" containerID="cri-o://abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda" gracePeriod=15 Mar 13 12:54:22.734636 master-0 kubenswrapper[19715]: I0313 12:54:22.731898 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="4c3280e9367536f782caf8bdc07edb85" podUID="077dd10388b9e3e48a07382126e86621" Mar 13 12:54:23.709472 master-0 kubenswrapper[19715]: I0313 12:54:23.709410 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_4c3280e9367536f782caf8bdc07edb85/kube-apiserver-cert-syncer/0.log" Mar 13 12:54:23.710237 master-0 kubenswrapper[19715]: I0313 12:54:23.710197 19715 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7" exitCode=0 Mar 13 12:54:23.710237 master-0 kubenswrapper[19715]: I0313 12:54:23.710231 19715 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8" exitCode=0 Mar 13 12:54:23.710378 master-0 kubenswrapper[19715]: I0313 12:54:23.710246 19715 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b" exitCode=0 Mar 13 12:54:23.710378 master-0 kubenswrapper[19715]: I0313 12:54:23.710261 
19715 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda" exitCode=2 Mar 13 12:54:24.041622 master-0 kubenswrapper[19715]: I0313 12:54:24.041549 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:54:24.140893 master-0 kubenswrapper[19715]: I0313 12:54:24.140819 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-kubelet-dir\") pod \"139213ac-1249-40eb-853f-768a8c20f6cd\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " Mar 13 12:54:24.141206 master-0 kubenswrapper[19715]: I0313 12:54:24.141073 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/139213ac-1249-40eb-853f-768a8c20f6cd-kube-api-access\") pod \"139213ac-1249-40eb-853f-768a8c20f6cd\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " Mar 13 12:54:24.141206 master-0 kubenswrapper[19715]: I0313 12:54:24.141109 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-var-lock\") pod \"139213ac-1249-40eb-853f-768a8c20f6cd\" (UID: \"139213ac-1249-40eb-853f-768a8c20f6cd\") " Mar 13 12:54:24.141407 master-0 kubenswrapper[19715]: I0313 12:54:24.141349 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-var-lock" (OuterVolumeSpecName: "var-lock") pod "139213ac-1249-40eb-853f-768a8c20f6cd" (UID: "139213ac-1249-40eb-853f-768a8c20f6cd"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:54:24.141892 master-0 kubenswrapper[19715]: I0313 12:54:24.141103 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "139213ac-1249-40eb-853f-768a8c20f6cd" (UID: "139213ac-1249-40eb-853f-768a8c20f6cd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:54:24.141956 master-0 kubenswrapper[19715]: I0313 12:54:24.141925 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:24.144678 master-0 kubenswrapper[19715]: I0313 12:54:24.144622 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/139213ac-1249-40eb-853f-768a8c20f6cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "139213ac-1249-40eb-853f-768a8c20f6cd" (UID: "139213ac-1249-40eb-853f-768a8c20f6cd"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:54:24.243911 master-0 kubenswrapper[19715]: I0313 12:54:24.243751 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/139213ac-1249-40eb-853f-768a8c20f6cd-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:24.243911 master-0 kubenswrapper[19715]: I0313 12:54:24.243803 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/139213ac-1249-40eb-853f-768a8c20f6cd-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:24.718564 master-0 kubenswrapper[19715]: I0313 12:54:24.718473 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"139213ac-1249-40eb-853f-768a8c20f6cd","Type":"ContainerDied","Data":"6242c6d4326ab01ab0a1a9b6c0e5fb24a858803e12883f2c03d3eb4ef6c31761"} Mar 13 12:54:24.718564 master-0 kubenswrapper[19715]: I0313 12:54:24.718518 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 12:54:24.718564 master-0 kubenswrapper[19715]: I0313 12:54:24.718534 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6242c6d4326ab01ab0a1a9b6c0e5fb24a858803e12883f2c03d3eb4ef6c31761" Mar 13 12:54:25.126087 master-0 kubenswrapper[19715]: I0313 12:54:25.126033 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_4c3280e9367536f782caf8bdc07edb85/kube-apiserver-cert-syncer/0.log" Mar 13 12:54:25.127002 master-0 kubenswrapper[19715]: I0313 12:54:25.126963 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:54:25.266624 master-0 kubenswrapper[19715]: I0313 12:54:25.266395 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") pod \"4c3280e9367536f782caf8bdc07edb85\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " Mar 13 12:54:25.266624 master-0 kubenswrapper[19715]: I0313 12:54:25.266479 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "4c3280e9367536f782caf8bdc07edb85" (UID: "4c3280e9367536f782caf8bdc07edb85"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:54:25.266624 master-0 kubenswrapper[19715]: I0313 12:54:25.266535 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") pod \"4c3280e9367536f782caf8bdc07edb85\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " Mar 13 12:54:25.266928 master-0 kubenswrapper[19715]: I0313 12:54:25.266674 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") pod \"4c3280e9367536f782caf8bdc07edb85\" (UID: \"4c3280e9367536f782caf8bdc07edb85\") " Mar 13 12:54:25.266928 master-0 kubenswrapper[19715]: I0313 12:54:25.266735 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "4c3280e9367536f782caf8bdc07edb85" (UID: "4c3280e9367536f782caf8bdc07edb85"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:54:25.266928 master-0 kubenswrapper[19715]: I0313 12:54:25.266794 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "4c3280e9367536f782caf8bdc07edb85" (UID: "4c3280e9367536f782caf8bdc07edb85"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:54:25.267122 master-0 kubenswrapper[19715]: I0313 12:54:25.267077 19715 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:25.267122 master-0 kubenswrapper[19715]: I0313 12:54:25.267113 19715 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:25.267122 master-0 kubenswrapper[19715]: I0313 12:54:25.267122 19715 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/4c3280e9367536f782caf8bdc07edb85-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:54:25.708188 master-0 kubenswrapper[19715]: I0313 12:54:25.708119 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c3280e9367536f782caf8bdc07edb85" path="/var/lib/kubelet/pods/4c3280e9367536f782caf8bdc07edb85/volumes" Mar 13 12:54:25.718231 master-0 kubenswrapper[19715]: I0313 12:54:25.718142 19715 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:54:25.718482 master-0 kubenswrapper[19715]: I0313 
12:54:25.718238 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:54:25.718482 master-0 kubenswrapper[19715]: I0313 12:54:25.718307 19715 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" Mar 13 12:54:25.719262 master-0 kubenswrapper[19715]: I0313 12:54:25.719204 19715 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5b99add1353856acea33dcb530c729d1f04a71fe3603e00ce50bcb93fec430ed"} pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:54:25.720093 master-0 kubenswrapper[19715]: I0313 12:54:25.719280 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" containerID="cri-o://5b99add1353856acea33dcb530c729d1f04a71fe3603e00ce50bcb93fec430ed" gracePeriod=600 Mar 13 12:54:25.720093 master-0 kubenswrapper[19715]: E0313 12:54:25.719936 19715 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-mlgxw.189c678a0233e160\": dial tcp 192.168.32.10:6443: connect: connection refused" event=< Mar 13 12:54:25.720093 master-0 kubenswrapper[19715]: &Event{ObjectMeta:{machine-config-daemon-mlgxw.189c678a0233e160 openshift-machine-config-operator 15510 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] 
[] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-mlgxw,UID:e8d83309-58b2-40af-ab48-1f8b9aeffefb,APIVersion:v1,ResourceVersion:10544,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Mar 13 12:54:25.720093 master-0 kubenswrapper[19715]: body: Mar 13 12:54:25.720093 master-0 kubenswrapper[19715]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:49:55 +0000 UTC,LastTimestamp:2026-03-13 12:54:25.718210108 +0000 UTC m=+292.284882865,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 13 12:54:25.720093 master-0 kubenswrapper[19715]: > Mar 13 12:54:25.731817 master-0 kubenswrapper[19715]: I0313 12:54:25.731746 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_4c3280e9367536f782caf8bdc07edb85/kube-apiserver-cert-syncer/0.log" Mar 13 12:54:25.732476 master-0 kubenswrapper[19715]: I0313 12:54:25.732428 19715 generic.go:334] "Generic (PLEG): container finished" podID="4c3280e9367536f782caf8bdc07edb85" containerID="ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d" exitCode=0 Mar 13 12:54:25.733104 master-0 kubenswrapper[19715]: I0313 12:54:25.732500 19715 scope.go:117] "RemoveContainer" containerID="804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7" Mar 13 12:54:25.733104 master-0 kubenswrapper[19715]: I0313 12:54:25.732792 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:54:25.757269 master-0 kubenswrapper[19715]: I0313 12:54:25.754014 19715 scope.go:117] "RemoveContainer" containerID="a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8" Mar 13 12:54:25.794694 master-0 kubenswrapper[19715]: I0313 12:54:25.794345 19715 scope.go:117] "RemoveContainer" containerID="b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b" Mar 13 12:54:25.846318 master-0 kubenswrapper[19715]: I0313 12:54:25.846285 19715 scope.go:117] "RemoveContainer" containerID="abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda" Mar 13 12:54:25.888014 master-0 kubenswrapper[19715]: I0313 12:54:25.878496 19715 scope.go:117] "RemoveContainer" containerID="ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d" Mar 13 12:54:25.918130 master-0 kubenswrapper[19715]: I0313 12:54:25.917968 19715 scope.go:117] "RemoveContainer" containerID="626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac" Mar 13 12:54:25.953936 master-0 kubenswrapper[19715]: I0313 12:54:25.953801 19715 scope.go:117] "RemoveContainer" containerID="804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7" Mar 13 12:54:25.954687 master-0 kubenswrapper[19715]: E0313 12:54:25.954653 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7\": container with ID starting with 804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7 not found: ID does not exist" containerID="804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7" Mar 13 12:54:25.954758 master-0 kubenswrapper[19715]: I0313 12:54:25.954688 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7"} err="failed to get container status 
\"804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7\": rpc error: code = NotFound desc = could not find container \"804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7\": container with ID starting with 804ef28e14036f7303c5d6e6e45dcbb0cedf99139f36d1f57374ccf00d97cee7 not found: ID does not exist" Mar 13 12:54:25.954758 master-0 kubenswrapper[19715]: I0313 12:54:25.954711 19715 scope.go:117] "RemoveContainer" containerID="a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8" Mar 13 12:54:25.954939 master-0 kubenswrapper[19715]: E0313 12:54:25.954902 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8\": container with ID starting with a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8 not found: ID does not exist" containerID="a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8" Mar 13 12:54:25.954939 master-0 kubenswrapper[19715]: I0313 12:54:25.954928 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8"} err="failed to get container status \"a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8\": rpc error: code = NotFound desc = could not find container \"a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8\": container with ID starting with a3c58024f67ae180308a4e8a49853dc87bf86321406e281dc8b90652780cf7e8 not found: ID does not exist" Mar 13 12:54:25.955109 master-0 kubenswrapper[19715]: I0313 12:54:25.954944 19715 scope.go:117] "RemoveContainer" containerID="b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b" Mar 13 12:54:25.955194 master-0 kubenswrapper[19715]: E0313 12:54:25.955105 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b\": container with ID starting with b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b not found: ID does not exist" containerID="b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b" Mar 13 12:54:25.955194 master-0 kubenswrapper[19715]: I0313 12:54:25.955125 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b"} err="failed to get container status \"b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b\": rpc error: code = NotFound desc = could not find container \"b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b\": container with ID starting with b5660726f85acf73ec55d4ee2001ba993a3433abcdda90fa5a260e72e6b21a8b not found: ID does not exist" Mar 13 12:54:25.955194 master-0 kubenswrapper[19715]: I0313 12:54:25.955142 19715 scope.go:117] "RemoveContainer" containerID="abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda" Mar 13 12:54:25.955462 master-0 kubenswrapper[19715]: E0313 12:54:25.955365 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda\": container with ID starting with abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda not found: ID does not exist" containerID="abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda" Mar 13 12:54:25.955462 master-0 kubenswrapper[19715]: I0313 12:54:25.955381 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda"} err="failed to get container status \"abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda\": rpc error: code = NotFound desc = could not find container 
\"abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda\": container with ID starting with abcd6d5aa4955b21cb88f4db7c17fa3c7dda0c5db7df266b95a3cebcafda0eda not found: ID does not exist" Mar 13 12:54:25.955462 master-0 kubenswrapper[19715]: I0313 12:54:25.955393 19715 scope.go:117] "RemoveContainer" containerID="ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d" Mar 13 12:54:25.955717 master-0 kubenswrapper[19715]: E0313 12:54:25.955548 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d\": container with ID starting with ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d not found: ID does not exist" containerID="ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d" Mar 13 12:54:25.955717 master-0 kubenswrapper[19715]: I0313 12:54:25.955566 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d"} err="failed to get container status \"ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d\": rpc error: code = NotFound desc = could not find container \"ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d\": container with ID starting with ab17b12c5885b58961dc43d9987488abb5b6cf352a6b5c379b68434f84cc8d8d not found: ID does not exist" Mar 13 12:54:25.955717 master-0 kubenswrapper[19715]: I0313 12:54:25.955600 19715 scope.go:117] "RemoveContainer" containerID="626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac" Mar 13 12:54:25.955864 master-0 kubenswrapper[19715]: E0313 12:54:25.955754 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac\": container with ID starting with 
626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac not found: ID does not exist" containerID="626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac" Mar 13 12:54:25.955864 master-0 kubenswrapper[19715]: I0313 12:54:25.955772 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac"} err="failed to get container status \"626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac\": rpc error: code = NotFound desc = could not find container \"626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac\": container with ID starting with 626124191fdae204cfe9897416d9529fa50d0061135d603136ea0f1611f688ac not found: ID does not exist" Mar 13 12:54:26.744535 master-0 kubenswrapper[19715]: I0313 12:54:26.744442 19715 generic.go:334] "Generic (PLEG): container finished" podID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerID="5b99add1353856acea33dcb530c729d1f04a71fe3603e00ce50bcb93fec430ed" exitCode=0 Mar 13 12:54:26.744535 master-0 kubenswrapper[19715]: I0313 12:54:26.744503 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerDied","Data":"5b99add1353856acea33dcb530c729d1f04a71fe3603e00ce50bcb93fec430ed"} Mar 13 12:54:26.744535 master-0 kubenswrapper[19715]: I0313 12:54:26.744535 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerStarted","Data":"4372efee115bd956f110a1686f5b4492f3fae1a8246f84646b05662580e9f09a"} Mar 13 12:54:26.744535 master-0 kubenswrapper[19715]: I0313 12:54:26.744556 19715 scope.go:117] "RemoveContainer" containerID="059ba8cdf96cbfaa0c84868f9e73236a2a31a080a6c5d262ecec57fd9b950d4b" Mar 13 12:54:27.140528 master-0 kubenswrapper[19715]: 
E0313 12:54:27.140455 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:27.141554 master-0 kubenswrapper[19715]: E0313 12:54:27.141489 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:27.142308 master-0 kubenswrapper[19715]: E0313 12:54:27.142269 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:27.142842 master-0 kubenswrapper[19715]: E0313 12:54:27.142785 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:27.143415 master-0 kubenswrapper[19715]: E0313 12:54:27.143370 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:27.143514 master-0 kubenswrapper[19715]: I0313 12:54:27.143427 19715 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 12:54:27.143954 master-0 kubenswrapper[19715]: E0313 12:54:27.143918 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 13 12:54:27.345228 master-0 kubenswrapper[19715]: E0313 12:54:27.345129 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 13 12:54:27.743818 master-0 kubenswrapper[19715]: I0313 12:54:27.743638 19715 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c70cfe4-3ccc-4480-87da-c46a0ca720f1\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T12:54:21Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T12:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T12:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: [startup-monitor]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T12:54:21Z\\\",\\\"message\\\":\\\"containers with unready status: 
[startup-monitor]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-03-13T12:54:21Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"startup-monitor\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"manifests\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources/secrets\\\",\\\"name\\\":\\\"pod-resource-dir\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources/configmaps\\\",\\\"name\\\":\\\"pod-resource-dir\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lock\\\",\\\"name\\\":\\\"var-lock\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"var-log\\\"}]}],\\\"hostIP\\\":\\\"192.168.32.10\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.32.10\\\"}],\\\"podIP\\\":\\\"192.168.32.10\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.32.10\\\"}],\\\"startTime\\\":\\\"2026-03-13T12:54:21Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-startup-monitor-master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0/status\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:27.744568 master-0 kubenswrapper[19715]: I0313 12:54:27.744524 19715 
status_manager.go:851] "Failed to get status for pod" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-mlgxw\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:27.745170 master-0 kubenswrapper[19715]: I0313 12:54:27.745140 19715 status_manager.go:851] "Failed to get status for pod" podUID="139213ac-1249-40eb-853f-768a8c20f6cd" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:27.745748 master-0 kubenswrapper[19715]: E0313 12:54:27.745718 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 13 12:54:27.745835 master-0 kubenswrapper[19715]: I0313 12:54:27.745809 19715 status_manager.go:851] "Failed to get status for pod" podUID="899242a15b2bdf3b4a04fb323647ca94" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:27.750677 master-0 kubenswrapper[19715]: I0313 12:54:27.750570 19715 status_manager.go:851] "Failed to get status for pod" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-mlgxw\": dial tcp 192.168.32.10:6443: 
connect: connection refused" Mar 13 12:54:27.751163 master-0 kubenswrapper[19715]: I0313 12:54:27.751090 19715 status_manager.go:851] "Failed to get status for pod" podUID="139213ac-1249-40eb-853f-768a8c20f6cd" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:27.751495 master-0 kubenswrapper[19715]: I0313 12:54:27.751452 19715 status_manager.go:851] "Failed to get status for pod" podUID="899242a15b2bdf3b4a04fb323647ca94" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:28.548244 master-0 kubenswrapper[19715]: E0313 12:54:28.548145 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 13 12:54:30.150785 master-0 kubenswrapper[19715]: E0313 12:54:30.150504 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 13 12:54:30.529782 master-0 kubenswrapper[19715]: I0313 12:54:30.529622 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:54:30.529782 master-0 kubenswrapper[19715]: I0313 
12:54:30.529698 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:54:31.774775 master-0 kubenswrapper[19715]: I0313 12:54:31.774695 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:54:31.774775 master-0 kubenswrapper[19715]: I0313 12:54:31.774763 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:54:32.766698 master-0 kubenswrapper[19715]: I0313 12:54:32.766630 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 12:54:32.767605 master-0 kubenswrapper[19715]: I0313 12:54:32.767505 19715 status_manager.go:851] "Failed to get status for pod" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-mlgxw\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:32.768024 master-0 kubenswrapper[19715]: I0313 12:54:32.767981 19715 status_manager.go:851] "Failed to get status for pod" podUID="aa6a75ab47c06be4e74d05f552da4470" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:32.768505 master-0 kubenswrapper[19715]: I0313 12:54:32.768459 19715 status_manager.go:851] "Failed to get status for pod" podUID="139213ac-1249-40eb-853f-768a8c20f6cd" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:32.769148 master-0 kubenswrapper[19715]: I0313 12:54:32.769097 19715 status_manager.go:851] "Failed to get status for pod" podUID="899242a15b2bdf3b4a04fb323647ca94" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:33.351872 master-0 kubenswrapper[19715]: E0313 12:54:33.351789 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 13 12:54:33.682093 master-0 kubenswrapper[19715]: I0313 12:54:33.681977 19715 kubelet.go:1505] "Image garbage collection succeeded" Mar 13 12:54:33.702701 master-0 kubenswrapper[19715]: I0313 12:54:33.702538 19715 status_manager.go:851] "Failed to get status for pod" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-mlgxw\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 
12:54:33.703340 master-0 kubenswrapper[19715]: I0313 12:54:33.703245 19715 status_manager.go:851] "Failed to get status for pod" podUID="aa6a75ab47c06be4e74d05f552da4470" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:33.704357 master-0 kubenswrapper[19715]: I0313 12:54:33.704283 19715 status_manager.go:851] "Failed to get status for pod" podUID="139213ac-1249-40eb-853f-768a8c20f6cd" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:33.704809 master-0 kubenswrapper[19715]: I0313 12:54:33.704759 19715 status_manager.go:851] "Failed to get status for pod" podUID="899242a15b2bdf3b4a04fb323647ca94" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:34.642201 master-0 kubenswrapper[19715]: E0313 12:54:34.642018 19715 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/events/machine-config-daemon-mlgxw.189c678a0233e160\": dial tcp 192.168.32.10:6443: connect: connection refused" event=< Mar 13 12:54:34.642201 master-0 kubenswrapper[19715]: &Event{ObjectMeta:{machine-config-daemon-mlgxw.189c678a0233e160 openshift-machine-config-operator 15510 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:machine-config-daemon-mlgxw,UID:e8d83309-58b2-40af-ab48-1f8b9aeffefb,APIVersion:v1,ResourceVersion:10544,FieldPath:spec.containers{machine-config-daemon},},Reason:ProbeError,Message:Liveness probe error: Get "http://127.0.0.1:8798/health": dial tcp 127.0.0.1:8798: connect: connection refused Mar 13 12:54:34.642201 master-0 kubenswrapper[19715]: body: Mar 13 12:54:34.642201 master-0 kubenswrapper[19715]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:49:55 +0000 UTC,LastTimestamp:2026-03-13 12:54:25.718210108 +0000 UTC m=+292.284882865,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 13 12:54:34.642201 master-0 kubenswrapper[19715]: > Mar 13 12:54:35.696316 master-0 kubenswrapper[19715]: I0313 12:54:35.696207 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:54:35.697994 master-0 kubenswrapper[19715]: I0313 12:54:35.697899 19715 status_manager.go:851] "Failed to get status for pod" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-mlgxw\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:35.699228 master-0 kubenswrapper[19715]: I0313 12:54:35.698872 19715 status_manager.go:851] "Failed to get status for pod" podUID="aa6a75ab47c06be4e74d05f552da4470" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:35.699560 master-0 kubenswrapper[19715]: I0313 12:54:35.699445 19715 status_manager.go:851] "Failed to get status for pod" podUID="139213ac-1249-40eb-853f-768a8c20f6cd" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:35.700429 master-0 kubenswrapper[19715]: I0313 12:54:35.700359 19715 status_manager.go:851] "Failed to get status for pod" podUID="899242a15b2bdf3b4a04fb323647ca94" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:35.715297 master-0 kubenswrapper[19715]: I0313 12:54:35.715240 19715 kubelet.go:1909] "Trying to delete pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83b6072d-700c-4af1-8363-292b25a36969" Mar 13 12:54:35.715549 master-0 kubenswrapper[19715]: I0313 12:54:35.715530 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83b6072d-700c-4af1-8363-292b25a36969" Mar 13 12:54:35.716712 master-0 kubenswrapper[19715]: E0313 12:54:35.716649 19715 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:54:35.717534 master-0 kubenswrapper[19715]: I0313 12:54:35.717510 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:54:35.741135 master-0 kubenswrapper[19715]: W0313 12:54:35.740218 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077dd10388b9e3e48a07382126e86621.slice/crio-383ebbbec82de5bf262b749d0beee39dd44f5d02a4cf6070dfe3a02f6afc1d4f WatchSource:0}: Error finding container 383ebbbec82de5bf262b749d0beee39dd44f5d02a4cf6070dfe3a02f6afc1d4f: Status 404 returned error can't find the container with id 383ebbbec82de5bf262b749d0beee39dd44f5d02a4cf6070dfe3a02f6afc1d4f Mar 13 12:54:35.827301 master-0 kubenswrapper[19715]: I0313 12:54:35.827224 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"383ebbbec82de5bf262b749d0beee39dd44f5d02a4cf6070dfe3a02f6afc1d4f"} Mar 13 12:54:36.836677 master-0 kubenswrapper[19715]: I0313 12:54:36.836617 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_801e0e0ab4a7a1c742dfa21c487f9cca/kube-controller-manager/0.log" Mar 13 12:54:36.836677 master-0 kubenswrapper[19715]: I0313 12:54:36.836673 19715 generic.go:334] "Generic (PLEG): container finished" podID="801e0e0ab4a7a1c742dfa21c487f9cca" containerID="79d2112532eb814b4ddc9e964815bfcf0f82c0b3839cd7c9db7b085901b612ca" exitCode=1 Mar 13 12:54:36.837290 master-0 kubenswrapper[19715]: I0313 12:54:36.836721 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"801e0e0ab4a7a1c742dfa21c487f9cca","Type":"ContainerDied","Data":"79d2112532eb814b4ddc9e964815bfcf0f82c0b3839cd7c9db7b085901b612ca"} Mar 13 12:54:36.837290 master-0 kubenswrapper[19715]: I0313 12:54:36.837225 19715 scope.go:117] "RemoveContainer" containerID="79d2112532eb814b4ddc9e964815bfcf0f82c0b3839cd7c9db7b085901b612ca" Mar 13 12:54:36.838671 master-0 kubenswrapper[19715]: I0313 12:54:36.838606 19715 status_manager.go:851] "Failed to get status for pod" podUID="aa6a75ab47c06be4e74d05f552da4470" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:36.839360 master-0 kubenswrapper[19715]: I0313 12:54:36.839323 19715 status_manager.go:851] "Failed to get status for pod" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-mlgxw\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:36.840102 master-0 kubenswrapper[19715]: I0313 12:54:36.840057 19715 status_manager.go:851] "Failed to get status for pod" 
podUID="801e0e0ab4a7a1c742dfa21c487f9cca" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:36.840210 master-0 kubenswrapper[19715]: I0313 12:54:36.840103 19715 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc" exitCode=0 Mar 13 12:54:36.840210 master-0 kubenswrapper[19715]: I0313 12:54:36.840136 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc"} Mar 13 12:54:36.840416 master-0 kubenswrapper[19715]: I0313 12:54:36.840388 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83b6072d-700c-4af1-8363-292b25a36969" Mar 13 12:54:36.840416 master-0 kubenswrapper[19715]: I0313 12:54:36.840411 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83b6072d-700c-4af1-8363-292b25a36969" Mar 13 12:54:36.840992 master-0 kubenswrapper[19715]: E0313 12:54:36.840968 19715 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:54:36.841181 master-0 kubenswrapper[19715]: I0313 12:54:36.841000 19715 status_manager.go:851] "Failed to get status for pod" podUID="139213ac-1249-40eb-853f-768a8c20f6cd" pod="openshift-kube-apiserver/installer-3-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:36.841954 master-0 kubenswrapper[19715]: I0313 12:54:36.841899 19715 status_manager.go:851] "Failed to get status for pod" podUID="899242a15b2bdf3b4a04fb323647ca94" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:36.842810 master-0 kubenswrapper[19715]: I0313 12:54:36.842746 19715 status_manager.go:851] "Failed to get status for pod" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-config-operator/pods/machine-config-daemon-mlgxw\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:36.843461 master-0 kubenswrapper[19715]: I0313 12:54:36.843417 19715 status_manager.go:851] "Failed to get status for pod" podUID="aa6a75ab47c06be4e74d05f552da4470" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:36.844241 master-0 kubenswrapper[19715]: I0313 12:54:36.844137 19715 status_manager.go:851] "Failed to get status for pod" podUID="139213ac-1249-40eb-853f-768a8c20f6cd" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:54:36.844905 master-0 kubenswrapper[19715]: I0313 12:54:36.844855 19715 
status_manager.go:851] "Failed to get status for pod" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:54:36.845447 master-0 kubenswrapper[19715]: I0313 12:54:36.845411 19715 status_manager.go:851] "Failed to get status for pod" podUID="899242a15b2bdf3b4a04fb323647ca94" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-startup-monitor-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 12:54:37.861219 master-0 kubenswrapper[19715]: I0313 12:54:37.860360 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556"}
Mar 13 12:54:37.861219 master-0 kubenswrapper[19715]: I0313 12:54:37.860426 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260"}
Mar 13 12:54:37.861219 master-0 kubenswrapper[19715]: I0313 12:54:37.860437 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175"}
Mar 13 12:54:37.869220 master-0 kubenswrapper[19715]: I0313 12:54:37.869123 19715 generic.go:334] "Generic (PLEG): container finished" podID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" containerID="ac14ccdadcb85cedb903872ea2dbb40876363197cf4a91aa1d4403565a354eb1" exitCode=0
Mar 13 12:54:37.869423 master-0 kubenswrapper[19715]: I0313 12:54:37.869285 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerDied","Data":"ac14ccdadcb85cedb903872ea2dbb40876363197cf4a91aa1d4403565a354eb1"}
Mar 13 12:54:37.869423 master-0 kubenswrapper[19715]: I0313 12:54:37.869365 19715 scope.go:117] "RemoveContainer" containerID="94037d184139c388b62f88d584af05330086578d35ea58336f426f811ec331bf"
Mar 13 12:54:37.870080 master-0 kubenswrapper[19715]: I0313 12:54:37.870040 19715 scope.go:117] "RemoveContainer" containerID="ac14ccdadcb85cedb903872ea2dbb40876363197cf4a91aa1d4403565a354eb1"
Mar 13 12:54:37.901471 master-0 kubenswrapper[19715]: I0313 12:54:37.901414 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_801e0e0ab4a7a1c742dfa21c487f9cca/kube-controller-manager/0.log"
Mar 13 12:54:37.901635 master-0 kubenswrapper[19715]: I0313 12:54:37.901492 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"801e0e0ab4a7a1c742dfa21c487f9cca","Type":"ContainerStarted","Data":"91aba06d3555721ac7156a0d1fb3bcdde07eaa20c73d384ae32e60bb0e44531d"}
Mar 13 12:54:38.919404 master-0 kubenswrapper[19715]: I0313 12:54:38.919304 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641"}
Mar 13 12:54:38.919404 master-0 kubenswrapper[19715]: I0313 12:54:38.919374 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a"}
Mar 13 12:54:38.920229 master-0 kubenswrapper[19715]: I0313 12:54:38.919480 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:38.920229 master-0 kubenswrapper[19715]: I0313 12:54:38.919504 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83b6072d-700c-4af1-8363-292b25a36969"
Mar 13 12:54:38.920229 master-0 kubenswrapper[19715]: I0313 12:54:38.919525 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83b6072d-700c-4af1-8363-292b25a36969"
Mar 13 12:54:38.931360 master-0 kubenswrapper[19715]: I0313 12:54:38.924213 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerStarted","Data":"d836ebee2b8df84db4a256efc0b33ff9876b617e21efe7c76ae0f18e03037d7c"}
Mar 13 12:54:39.938356 master-0 kubenswrapper[19715]: I0313 12:54:39.938254 19715 generic.go:334] "Generic (PLEG): container finished" podID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" containerID="d836ebee2b8df84db4a256efc0b33ff9876b617e21efe7c76ae0f18e03037d7c" exitCode=0
Mar 13 12:54:39.938356 master-0 kubenswrapper[19715]: I0313 12:54:39.938317 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerDied","Data":"d836ebee2b8df84db4a256efc0b33ff9876b617e21efe7c76ae0f18e03037d7c"}
Mar 13 12:54:39.938356 master-0 kubenswrapper[19715]: I0313 12:54:39.938395 19715 scope.go:117] "RemoveContainer" containerID="ac14ccdadcb85cedb903872ea2dbb40876363197cf4a91aa1d4403565a354eb1"
Mar 13 12:54:39.939508 master-0 kubenswrapper[19715]: I0313 12:54:39.939113 19715 scope.go:117] "RemoveContainer" containerID="d836ebee2b8df84db4a256efc0b33ff9876b617e21efe7c76ae0f18e03037d7c"
Mar 13 12:54:39.939508 master-0 kubenswrapper[19715]: E0313 12:54:39.939394 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=insights-operator pod=insights-operator-8f89dfddd-s4gd8_openshift-insights(0ecab24a-cb8c-4171-9a04-c34d1d6d71c1)\"" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" podUID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1"
Mar 13 12:54:40.530100 master-0 kubenswrapper[19715]: I0313 12:54:40.530016 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:54:40.530417 master-0 kubenswrapper[19715]: I0313 12:54:40.530101 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:54:40.718220 master-0 kubenswrapper[19715]: I0313 12:54:40.718138 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:40.718559 master-0 kubenswrapper[19715]: I0313 12:54:40.718373 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:40.726695 master-0 kubenswrapper[19715]: I0313 12:54:40.726562 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:41.775208 master-0 kubenswrapper[19715]: I0313 12:54:41.775114 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 12:54:41.775897 master-0 kubenswrapper[19715]: I0313 12:54:41.775221 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 12:54:43.936989 master-0 kubenswrapper[19715]: I0313 12:54:43.936917 19715 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:43.969593 master-0 kubenswrapper[19715]: I0313 12:54:43.969517 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83b6072d-700c-4af1-8363-292b25a36969"
Mar 13 12:54:43.969593 master-0 kubenswrapper[19715]: I0313 12:54:43.969552 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83b6072d-700c-4af1-8363-292b25a36969"
Mar 13 12:54:43.973685 master-0 kubenswrapper[19715]: I0313 12:54:43.973645 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:54:43.975632 master-0 kubenswrapper[19715]: I0313 12:54:43.975599 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="077dd10388b9e3e48a07382126e86621" podUID="5048c963-d379-4c76-a87d-a00bff1b215d"
Mar 13 12:54:44.742029 master-0 kubenswrapper[19715]: I0313 12:54:44.741888 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:54:44.742029 master-0 kubenswrapper[19715]: I0313 12:54:44.741967 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:54:44.746947 master-0 kubenswrapper[19715]: I0313 12:54:44.746899 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:54:44.975910 master-0 kubenswrapper[19715]: I0313 12:54:44.975862 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83b6072d-700c-4af1-8363-292b25a36969"
Mar 13 12:54:44.975910 master-0 kubenswrapper[19715]: I0313 12:54:44.975897 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="83b6072d-700c-4af1-8363-292b25a36969"
Mar 13 12:54:50.530017 master-0 kubenswrapper[19715]: I0313 12:54:50.529902 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:54:50.531354 master-0 kubenswrapper[19715]: I0313 12:54:50.530050 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:54:51.696748 master-0 kubenswrapper[19715]: I0313 12:54:51.696546 19715 scope.go:117] "RemoveContainer" containerID="d836ebee2b8df84db4a256efc0b33ff9876b617e21efe7c76ae0f18e03037d7c"
Mar 13 12:54:51.775687 master-0 kubenswrapper[19715]: I0313 12:54:51.775607 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 12:54:51.775983 master-0 kubenswrapper[19715]: I0313 12:54:51.775682 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 12:54:52.031282 master-0 kubenswrapper[19715]: I0313 12:54:52.031136 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerStarted","Data":"03b9c429428e111e247ef2b615a160111a24a9c3362b3af8e8175afc9eb0ad9e"}
Mar 13 12:54:53.040677 master-0 kubenswrapper[19715]: I0313 12:54:53.040550 19715 generic.go:334] "Generic (PLEG): container finished" podID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" containerID="03b9c429428e111e247ef2b615a160111a24a9c3362b3af8e8175afc9eb0ad9e" exitCode=0
Mar 13 12:54:53.040677 master-0 kubenswrapper[19715]: I0313 12:54:53.040647 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerDied","Data":"03b9c429428e111e247ef2b615a160111a24a9c3362b3af8e8175afc9eb0ad9e"}
Mar 13 12:54:53.041364 master-0 kubenswrapper[19715]: I0313 12:54:53.040708 19715 scope.go:117] "RemoveContainer" containerID="d836ebee2b8df84db4a256efc0b33ff9876b617e21efe7c76ae0f18e03037d7c"
Mar 13 12:54:53.041449 master-0 kubenswrapper[19715]: I0313 12:54:53.041412 19715 scope.go:117] "RemoveContainer" containerID="03b9c429428e111e247ef2b615a160111a24a9c3362b3af8e8175afc9eb0ad9e"
Mar 13 12:54:53.041767 master-0 kubenswrapper[19715]: E0313 12:54:53.041733 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=insights-operator pod=insights-operator-8f89dfddd-s4gd8_openshift-insights(0ecab24a-cb8c-4171-9a04-c34d1d6d71c1)\"" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" podUID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1"
Mar 13 12:54:53.724750 master-0 kubenswrapper[19715]: I0313 12:54:53.724661 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="077dd10388b9e3e48a07382126e86621" podUID="5048c963-d379-4c76-a87d-a00bff1b215d"
Mar 13 12:54:54.548601 master-0 kubenswrapper[19715]: I0313 12:54:54.548485 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 13 12:54:54.747123 master-0 kubenswrapper[19715]: I0313 12:54:54.747050 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:54:57.872412 master-0 kubenswrapper[19715]: I0313 12:54:57.872342 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 13 12:54:58.257868 master-0 kubenswrapper[19715]: I0313 12:54:58.257761 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 13 12:54:58.667567 master-0 kubenswrapper[19715]: I0313 12:54:58.667478 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 13 12:54:59.725785 master-0 kubenswrapper[19715]: I0313 12:54:59.725682 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 13 12:55:00.038256 master-0 kubenswrapper[19715]: I0313 12:55:00.038103 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 13 12:55:00.530450 master-0 kubenswrapper[19715]: I0313 12:55:00.530364 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:55:00.530941 master-0 kubenswrapper[19715]: I0313 12:55:00.530499 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:55:01.024604 master-0 kubenswrapper[19715]: I0313 12:55:01.024491 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0"
Mar 13 12:55:01.724081 master-0 kubenswrapper[19715]: I0313 12:55:01.724035 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-84k2dnesbumig"
Mar 13 12:55:01.775234 master-0 kubenswrapper[19715]: I0313 12:55:01.775110 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 12:55:01.775524 master-0 kubenswrapper[19715]: I0313 12:55:01.775308 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 12:55:02.874862 master-0 kubenswrapper[19715]: I0313 12:55:02.874798 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-zzprz"
Mar 13 12:55:03.119435 master-0 kubenswrapper[19715]: I0313 12:55:03.119331 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 13 12:55:03.797698 master-0 kubenswrapper[19715]: I0313 12:55:03.797629 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle"
Mar 13 12:55:06.696887 master-0 kubenswrapper[19715]: I0313 12:55:06.696830 19715 scope.go:117] "RemoveContainer" containerID="03b9c429428e111e247ef2b615a160111a24a9c3362b3af8e8175afc9eb0ad9e"
Mar 13 12:55:06.697845 master-0 kubenswrapper[19715]: E0313 12:55:06.697812 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=insights-operator pod=insights-operator-8f89dfddd-s4gd8_openshift-insights(0ecab24a-cb8c-4171-9a04-c34d1d6d71c1)\"" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" podUID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1"
Mar 13 12:55:07.311142 master-0 kubenswrapper[19715]: I0313 12:55:07.310798 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 13 12:55:08.889691 master-0 kubenswrapper[19715]: I0313 12:55:08.889620 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 13 12:55:10.530474 master-0 kubenswrapper[19715]: I0313 12:55:10.530252 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:55:10.530474 master-0 kubenswrapper[19715]: I0313 12:55:10.530339 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:55:11.776461 master-0 kubenswrapper[19715]: I0313 12:55:11.776336 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 12:55:11.777547 master-0 kubenswrapper[19715]: I0313 12:55:11.776471 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 12:55:12.728705 master-0 kubenswrapper[19715]: I0313 12:55:12.728627 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:55:12.761614 master-0 kubenswrapper[19715]: I0313 12:55:12.761498 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:55:13.238708 master-0 kubenswrapper[19715]: I0313 12:55:13.238627 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 12:55:19.696997 master-0 kubenswrapper[19715]: I0313 12:55:19.696924 19715 scope.go:117] "RemoveContainer" containerID="03b9c429428e111e247ef2b615a160111a24a9c3362b3af8e8175afc9eb0ad9e"
Mar 13 12:55:20.267093 master-0 kubenswrapper[19715]: I0313 12:55:20.267007 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerStarted","Data":"07e13748c833ce996ac1538194bac668886ff0e2f8f58bdd076d22864d7e0170"}
Mar 13 12:55:20.529674 master-0 kubenswrapper[19715]: I0313 12:55:20.529601 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:55:20.529674 master-0 kubenswrapper[19715]: I0313 12:55:20.529669 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:55:21.282721 master-0 kubenswrapper[19715]: I0313 12:55:21.282617 19715 generic.go:334] "Generic (PLEG): container finished" podID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" containerID="07e13748c833ce996ac1538194bac668886ff0e2f8f58bdd076d22864d7e0170" exitCode=0
Mar 13 12:55:21.282721 master-0 kubenswrapper[19715]: I0313 12:55:21.282629 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerDied","Data":"07e13748c833ce996ac1538194bac668886ff0e2f8f58bdd076d22864d7e0170"}
Mar 13 12:55:21.282721 master-0 kubenswrapper[19715]: I0313 12:55:21.282755 19715 scope.go:117] "RemoveContainer" containerID="03b9c429428e111e247ef2b615a160111a24a9c3362b3af8e8175afc9eb0ad9e"
Mar 13 12:55:21.284481 master-0 kubenswrapper[19715]: I0313 12:55:21.284440 19715 scope.go:117] "RemoveContainer" containerID="07e13748c833ce996ac1538194bac668886ff0e2f8f58bdd076d22864d7e0170"
Mar 13 12:55:21.284807 master-0 kubenswrapper[19715]: E0313 12:55:21.284758 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=insights-operator pod=insights-operator-8f89dfddd-s4gd8_openshift-insights(0ecab24a-cb8c-4171-9a04-c34d1d6d71c1)\"" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" podUID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1"
Mar 13 12:55:21.776076 master-0 kubenswrapper[19715]: I0313 12:55:21.775958 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 12:55:21.776076 master-0 kubenswrapper[19715]: I0313 12:55:21.776073 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 12:55:21.930065 master-0 kubenswrapper[19715]: I0313 12:55:21.929975 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 13 12:55:23.377370 master-0 kubenswrapper[19715]: I0313 12:55:23.376471 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 12:55:24.173739 master-0 kubenswrapper[19715]: I0313 12:55:24.173605 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-p8xg8"
Mar 13 12:55:24.730479 master-0 kubenswrapper[19715]: I0313 12:55:24.730386 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 13 12:55:25.077560 master-0 kubenswrapper[19715]: I0313 12:55:25.077270 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 13 12:55:25.138622 master-0 kubenswrapper[19715]: I0313 12:55:25.138486 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 13 12:55:25.506654 master-0 kubenswrapper[19715]: I0313 12:55:25.506547 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 13 12:55:26.033851 master-0 kubenswrapper[19715]: I0313 12:55:26.033770 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 13 12:55:26.254123 master-0 kubenswrapper[19715]: I0313 12:55:26.253921 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-wsv7b"
Mar 13 12:55:26.264160 master-0 kubenswrapper[19715]: I0313 12:55:26.264117 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 13 12:55:26.272136 master-0 kubenswrapper[19715]: I0313 12:55:26.272048 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 13 12:55:26.565527 master-0 kubenswrapper[19715]: I0313 12:55:26.565420 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 13 12:55:27.808409 master-0 kubenswrapper[19715]: I0313 12:55:27.808293 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 13 12:55:28.282518 master-0 kubenswrapper[19715]: I0313 12:55:28.282412 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 13 12:55:28.366392 master-0 kubenswrapper[19715]: I0313 12:55:28.366271 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 13 12:55:28.692332 master-0 kubenswrapper[19715]: I0313 12:55:28.692260 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 13 12:55:29.127728 master-0 kubenswrapper[19715]: I0313 12:55:29.127341 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 12:55:29.165726 master-0 kubenswrapper[19715]: I0313 12:55:29.165635 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 13 12:55:29.177617 master-0 kubenswrapper[19715]: I0313 12:55:29.177505 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:55:29.406660 master-0 kubenswrapper[19715]: I0313 12:55:29.406522 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7pbjup2gcsfqa"
Mar 13 12:55:29.474617 master-0 kubenswrapper[19715]: I0313 12:55:29.474523 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 13 12:55:29.525852 master-0 kubenswrapper[19715]: I0313 12:55:29.525764 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 13 12:55:29.783004 master-0 kubenswrapper[19715]: I0313 12:55:29.782711 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config"
Mar 13 12:55:29.907938 master-0 kubenswrapper[19715]: I0313 12:55:29.907869 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs"
Mar 13 12:55:29.942137 master-0 kubenswrapper[19715]: I0313 12:55:29.942030 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 13 12:55:30.372062 master-0 kubenswrapper[19715]: I0313 12:55:30.371958 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 13 12:55:30.530162 master-0 kubenswrapper[19715]: I0313 12:55:30.530076 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:55:30.530446 master-0 kubenswrapper[19715]: I0313 12:55:30.530194 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:55:30.910594 master-0 kubenswrapper[19715]: I0313 12:55:30.910487 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 13 12:55:31.155168 master-0 kubenswrapper[19715]: I0313 12:55:31.155080 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 13 12:55:31.775282 master-0 kubenswrapper[19715]: I0313 12:55:31.775236 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 12:55:31.776318 master-0 kubenswrapper[19715]: I0313 12:55:31.775902 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 12:55:31.839822 master-0 kubenswrapper[19715]: I0313 12:55:31.839717 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 13 12:55:31.878551 master-0 kubenswrapper[19715]: I0313 12:55:31.878456 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 13 12:55:32.521162 master-0 kubenswrapper[19715]: I0313 12:55:32.521052 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 13 12:55:32.907318 master-0 kubenswrapper[19715]: I0313 12:55:32.907230 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 12:55:32.978288 master-0 kubenswrapper[19715]: I0313 12:55:32.978166 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 12:55:33.287238 master-0 kubenswrapper[19715]: I0313 12:55:33.286986 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 12:55:33.607034 master-0 kubenswrapper[19715]: I0313 12:55:33.606797 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Mar 13 12:55:33.648754 master-0 kubenswrapper[19715]: I0313 12:55:33.648650 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-mr4r4"
Mar 13 12:55:33.718743 master-0 kubenswrapper[19715]: I0313 12:55:33.718618 19715 scope.go:117] "RemoveContainer" containerID="07e13748c833ce996ac1538194bac668886ff0e2f8f58bdd076d22864d7e0170"
Mar 13 12:55:33.719264 master-0 kubenswrapper[19715]: E0313 12:55:33.718974 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=insights-operator pod=insights-operator-8f89dfddd-s4gd8_openshift-insights(0ecab24a-cb8c-4171-9a04-c34d1d6d71c1)\"" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" podUID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1"
Mar 13 12:55:33.918553 master-0 kubenswrapper[19715]: I0313 12:55:33.918447 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 13 12:55:34.101773 master-0 kubenswrapper[19715]: I0313 12:55:34.101607 19715 scope.go:117] "RemoveContainer" containerID="dd9e5e8e374c81e1c66f6e45811bee38c8f529d7dd83812725266a3311710c8f"
Mar 13 12:55:34.217003 master-0 kubenswrapper[19715]: I0313 12:55:34.216657 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 13 12:55:34.308152 master-0 kubenswrapper[19715]: I0313 12:55:34.308086 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 13 12:55:34.313792 master-0 kubenswrapper[19715]: I0313 12:55:34.313724 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 13 12:55:34.600694 master-0 kubenswrapper[19715]: I0313 12:55:34.600330 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 13 12:55:34.712911 master-0 kubenswrapper[19715]: I0313 12:55:34.712788 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 13 12:55:34.791719 master-0 kubenswrapper[19715]: I0313 12:55:34.791607 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 13 12:55:34.836227 master-0 kubenswrapper[19715]: I0313 12:55:34.836117 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 12:55:34.962792 master-0 kubenswrapper[19715]: I0313 12:55:34.962697 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 13 12:55:35.326190 master-0 kubenswrapper[19715]: I0313 12:55:35.325971 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 13 12:55:35.408387 master-0 kubenswrapper[19715]: I0313 12:55:35.408209 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 13 12:55:35.554565 master-0 kubenswrapper[19715]: I0313 12:55:35.554469 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 12:55:36.041972 master-0 kubenswrapper[19715]: I0313 12:55:36.041884 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 12:55:36.437833 master-0 kubenswrapper[19715]: I0313 12:55:36.437756 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-fbzjs"
Mar 13 12:55:36.645705 master-0 kubenswrapper[19715]: I0313 12:55:36.645638 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 12:55:37.593806 master-0 kubenswrapper[19715]: I0313 12:55:37.593664 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 13 12:55:37.793467 master-0 kubenswrapper[19715]: I0313 12:55:37.793182 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 13 12:55:37.913769 master-0 kubenswrapper[19715]: I0313 12:55:37.913682 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 13 12:55:37.970772 master-0 kubenswrapper[19715]: I0313 12:55:37.970678 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 12:55:38.111509 master-0 kubenswrapper[19715]: I0313 12:55:38.111428 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 12:55:38.115627 master-0 kubenswrapper[19715]: I0313 12:55:38.115548 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 13 12:55:38.127311 master-0 kubenswrapper[19715]: I0313 12:55:38.127218 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-7fzhf"
Mar 13 12:55:38.173609 master-0 kubenswrapper[19715]: I0313 12:55:38.173337 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:55:38.242448 master-0 kubenswrapper[19715]: I0313 12:55:38.242342 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 13 12:55:38.318349 master-0 kubenswrapper[19715]: I0313 12:55:38.318278 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 13 12:55:38.451843 master-0 kubenswrapper[19715]: I0313 12:55:38.451535 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 12:55:38.726940 master-0 kubenswrapper[19715]: I0313 12:55:38.726708 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-h5lt2"
Mar 13 12:55:39.051210 master-0 kubenswrapper[19715]: I0313 12:55:39.051054 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 13 12:55:39.075189 master-0 kubenswrapper[19715]: I0313 12:55:39.075134 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Mar 13 12:55:39.355728 master-0 kubenswrapper[19715]: I0313 12:55:39.355556 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 13 12:55:39.439412 master-0 kubenswrapper[19715]: I0313 12:55:39.439319 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 13 12:55:39.451131 master-0 kubenswrapper[19715]: I0313 12:55:39.451049 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-l78bb"
Mar 13 12:55:39.497437 master-0 kubenswrapper[19715]: I0313 12:55:39.497346 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 13 12:55:39.545231 master-0 kubenswrapper[19715]: I0313 12:55:39.543223 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 13 12:55:39.688547 master-0 kubenswrapper[19715]: I0313 12:55:39.688477 19715 reflector.go:368] Caches populated for *v1.Secret from
object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7gls2" Mar 13 12:55:39.997554 master-0 kubenswrapper[19715]: I0313 12:55:39.997299 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 12:55:40.010791 master-0 kubenswrapper[19715]: I0313 12:55:40.010717 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 12:55:40.274428 master-0 kubenswrapper[19715]: I0313 12:55:40.274179 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 12:55:40.530323 master-0 kubenswrapper[19715]: I0313 12:55:40.530176 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:55:40.531327 master-0 kubenswrapper[19715]: I0313 12:55:40.531106 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:55:40.581813 master-0 kubenswrapper[19715]: I0313 12:55:40.581724 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 12:55:40.955231 master-0 kubenswrapper[19715]: I0313 12:55:40.955151 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 12:55:41.192049 master-0 kubenswrapper[19715]: I0313 12:55:41.191921 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 12:55:41.440089 master-0 
kubenswrapper[19715]: I0313 12:55:41.439980 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 12:55:41.551148 master-0 kubenswrapper[19715]: I0313 12:55:41.551087 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 12:55:41.775807 master-0 kubenswrapper[19715]: I0313 12:55:41.775541 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:55:41.775807 master-0 kubenswrapper[19715]: I0313 12:55:41.775684 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:55:41.784706 master-0 kubenswrapper[19715]: I0313 12:55:41.784410 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 13 12:55:41.830738 master-0 kubenswrapper[19715]: I0313 12:55:41.830650 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 13 12:55:42.425559 master-0 kubenswrapper[19715]: I0313 12:55:42.425480 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 12:55:42.732006 master-0 kubenswrapper[19715]: I0313 12:55:42.731548 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 12:55:42.764411 master-0 kubenswrapper[19715]: I0313 12:55:42.764337 19715 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 12:55:42.857905 master-0 kubenswrapper[19715]: I0313 12:55:42.857810 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 12:55:43.008893 master-0 kubenswrapper[19715]: I0313 12:55:43.008632 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gft2f" Mar 13 12:55:43.008893 master-0 kubenswrapper[19715]: I0313 12:55:43.008675 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 12:55:43.091050 master-0 kubenswrapper[19715]: I0313 12:55:43.090935 19715 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 12:55:43.122659 master-0 kubenswrapper[19715]: I0313 12:55:43.122518 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 12:55:43.142502 master-0 kubenswrapper[19715]: I0313 12:55:43.142445 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 12:55:43.155052 master-0 kubenswrapper[19715]: I0313 12:55:43.155007 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 12:55:43.162485 master-0 kubenswrapper[19715]: I0313 12:55:43.162420 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 13 12:55:43.387183 master-0 kubenswrapper[19715]: I0313 12:55:43.387111 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 13 12:55:43.500178 master-0 kubenswrapper[19715]: I0313 12:55:43.500107 19715 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 12:55:43.683180 master-0 kubenswrapper[19715]: I0313 12:55:43.682961 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 13 12:55:43.694176 master-0 kubenswrapper[19715]: I0313 12:55:43.694126 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 12:55:43.786227 master-0 kubenswrapper[19715]: I0313 12:55:43.786103 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 13 12:55:44.190054 master-0 kubenswrapper[19715]: I0313 12:55:44.189956 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 12:55:44.262969 master-0 kubenswrapper[19715]: I0313 12:55:44.262866 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 12:55:44.780923 master-0 kubenswrapper[19715]: I0313 12:55:44.780782 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 12:55:44.781860 master-0 kubenswrapper[19715]: I0313 12:55:44.781482 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:55:44.815167 master-0 kubenswrapper[19715]: I0313 12:55:44.815052 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 12:55:44.828822 master-0 kubenswrapper[19715]: I0313 12:55:44.828692 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-46jst" Mar 13 12:55:44.855359 master-0 
kubenswrapper[19715]: I0313 12:55:44.855242 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 13 12:55:45.008996 master-0 kubenswrapper[19715]: I0313 12:55:45.008876 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dh2b7" Mar 13 12:55:45.160117 master-0 kubenswrapper[19715]: I0313 12:55:45.160016 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 12:55:45.174801 master-0 kubenswrapper[19715]: I0313 12:55:45.174687 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 12:55:45.187396 master-0 kubenswrapper[19715]: I0313 12:55:45.187304 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 12:55:45.364000 master-0 kubenswrapper[19715]: I0313 12:55:45.363925 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 13 12:55:45.370652 master-0 kubenswrapper[19715]: I0313 12:55:45.370599 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-mjc6s" Mar 13 12:55:45.719913 master-0 kubenswrapper[19715]: I0313 12:55:45.719804 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 13 12:55:45.994500 master-0 kubenswrapper[19715]: I0313 12:55:45.994320 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 12:55:46.138685 master-0 kubenswrapper[19715]: I0313 12:55:46.138611 19715 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 12:55:46.146232 master-0 kubenswrapper[19715]: I0313 12:55:46.146171 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-ct6jh" Mar 13 12:55:46.189672 master-0 kubenswrapper[19715]: I0313 12:55:46.189532 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 12:55:46.193199 master-0 kubenswrapper[19715]: I0313 12:55:46.193145 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 13 12:55:46.262228 master-0 kubenswrapper[19715]: I0313 12:55:46.262056 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 12:55:46.352026 master-0 kubenswrapper[19715]: I0313 12:55:46.351956 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 13 12:55:46.489559 master-0 kubenswrapper[19715]: I0313 12:55:46.489511 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 13 12:55:46.490873 master-0 kubenswrapper[19715]: I0313 12:55:46.490835 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 13 12:55:46.547689 master-0 kubenswrapper[19715]: I0313 12:55:46.547450 19715 generic.go:334] "Generic (PLEG): container finished" podID="6e4e773c-d970-4f5e-9172-c1ebdb41888d" containerID="82ec61ebd3cfad1166f1099232b3eab436011df1e1a88d79d59c944d88861af1" exitCode=0 Mar 13 12:55:46.547689 master-0 kubenswrapper[19715]: I0313 12:55:46.547597 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" 
event={"ID":"6e4e773c-d970-4f5e-9172-c1ebdb41888d","Type":"ContainerDied","Data":"82ec61ebd3cfad1166f1099232b3eab436011df1e1a88d79d59c944d88861af1"} Mar 13 12:55:46.547986 master-0 kubenswrapper[19715]: I0313 12:55:46.547728 19715 scope.go:117] "RemoveContainer" containerID="78ae5f5f6dbecb618369b89512191ed3dcff14b5aecf6f0222631f845d48f587" Mar 13 12:55:46.548698 master-0 kubenswrapper[19715]: I0313 12:55:46.548639 19715 scope.go:117] "RemoveContainer" containerID="82ec61ebd3cfad1166f1099232b3eab436011df1e1a88d79d59c944d88861af1" Mar 13 12:55:46.548970 master-0 kubenswrapper[19715]: E0313 12:55:46.548927 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-64bf9778cb-7wnld_openshift-marketplace(6e4e773c-d970-4f5e-9172-c1ebdb41888d)\"" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" podUID="6e4e773c-d970-4f5e-9172-c1ebdb41888d" Mar 13 12:55:46.680881 master-0 kubenswrapper[19715]: I0313 12:55:46.680783 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 12:55:46.696841 master-0 kubenswrapper[19715]: I0313 12:55:46.696788 19715 scope.go:117] "RemoveContainer" containerID="07e13748c833ce996ac1538194bac668886ff0e2f8f58bdd076d22864d7e0170" Mar 13 12:55:46.697112 master-0 kubenswrapper[19715]: E0313 12:55:46.697032 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=insights-operator pod=insights-operator-8f89dfddd-s4gd8_openshift-insights(0ecab24a-cb8c-4171-9a04-c34d1d6d71c1)\"" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" podUID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1" Mar 13 12:55:46.781658 master-0 kubenswrapper[19715]: I0313 12:55:46.781562 19715 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 12:55:46.817714 master-0 kubenswrapper[19715]: I0313 12:55:46.817511 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 12:55:46.970330 master-0 kubenswrapper[19715]: I0313 12:55:46.970246 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 12:55:47.209104 master-0 kubenswrapper[19715]: I0313 12:55:47.208989 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 12:55:47.268938 master-0 kubenswrapper[19715]: I0313 12:55:47.268866 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 13 12:55:47.361129 master-0 kubenswrapper[19715]: I0313 12:55:47.361057 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 12:55:47.638902 master-0 kubenswrapper[19715]: I0313 12:55:47.638809 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 13 12:55:47.645518 master-0 kubenswrapper[19715]: I0313 12:55:47.645443 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 12:55:47.651823 master-0 kubenswrapper[19715]: I0313 12:55:47.651760 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 12:55:47.873144 master-0 kubenswrapper[19715]: I0313 12:55:47.873072 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 13 12:55:47.912375 master-0 kubenswrapper[19715]: I0313 12:55:47.912246 
19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 13 12:55:47.938059 master-0 kubenswrapper[19715]: I0313 12:55:47.937996 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 12:55:48.161714 master-0 kubenswrapper[19715]: I0313 12:55:48.161567 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 12:55:48.203203 master-0 kubenswrapper[19715]: I0313 12:55:48.203065 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-cjs56" Mar 13 12:55:48.211138 master-0 kubenswrapper[19715]: I0313 12:55:48.211064 19715 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 12:55:48.268554 master-0 kubenswrapper[19715]: I0313 12:55:48.268484 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 12:55:48.843224 master-0 kubenswrapper[19715]: I0313 12:55:48.843175 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 12:55:48.932788 master-0 kubenswrapper[19715]: I0313 12:55:48.932737 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 13 12:55:49.165568 master-0 kubenswrapper[19715]: I0313 12:55:49.165477 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 12:55:49.191839 master-0 kubenswrapper[19715]: I0313 12:55:49.191771 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 12:55:49.233152 master-0 kubenswrapper[19715]: I0313 12:55:49.233041 19715 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-2tphk" Mar 13 12:55:49.248310 master-0 kubenswrapper[19715]: I0313 12:55:49.248232 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 13 12:55:49.271916 master-0 kubenswrapper[19715]: I0313 12:55:49.271864 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 12:55:49.302550 master-0 kubenswrapper[19715]: I0313 12:55:49.302469 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 13 12:55:49.383776 master-0 kubenswrapper[19715]: I0313 12:55:49.383679 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 13 12:55:49.546895 master-0 kubenswrapper[19715]: I0313 12:55:49.546604 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 13 12:55:49.647684 master-0 kubenswrapper[19715]: I0313 12:55:49.647608 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 12:55:49.666507 master-0 kubenswrapper[19715]: I0313 12:55:49.666430 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 12:55:49.688866 master-0 kubenswrapper[19715]: I0313 12:55:49.688779 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-g589p" Mar 13 12:55:49.710498 master-0 kubenswrapper[19715]: I0313 12:55:49.710410 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 12:55:49.843365 master-0 kubenswrapper[19715]: I0313 12:55:49.843214 19715 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 12:55:49.871624 master-0 kubenswrapper[19715]: I0313 12:55:49.871545 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-qmg42" Mar 13 12:55:49.876367 master-0 kubenswrapper[19715]: I0313 12:55:49.876317 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 12:55:49.895680 master-0 kubenswrapper[19715]: I0313 12:55:49.894069 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 12:55:49.989960 master-0 kubenswrapper[19715]: I0313 12:55:49.989894 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 13 12:55:50.106090 master-0 kubenswrapper[19715]: I0313 12:55:50.105923 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-78fwj" Mar 13 12:55:50.129178 master-0 kubenswrapper[19715]: I0313 12:55:50.129046 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-mwrx7" Mar 13 12:55:50.233360 master-0 kubenswrapper[19715]: I0313 12:55:50.233268 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-jwq7f" Mar 13 12:55:50.258016 master-0 kubenswrapper[19715]: I0313 12:55:50.257939 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 13 12:55:50.530662 master-0 kubenswrapper[19715]: I0313 12:55:50.530533 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get 
\"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body= Mar 13 12:55:50.530961 master-0 kubenswrapper[19715]: I0313 12:55:50.530686 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" Mar 13 12:55:50.824496 master-0 kubenswrapper[19715]: I0313 12:55:50.824343 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 13 12:55:51.010909 master-0 kubenswrapper[19715]: I0313 12:55:51.010763 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 13 12:55:51.017618 master-0 kubenswrapper[19715]: I0313 12:55:51.017496 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-6slw7" Mar 13 12:55:51.030421 master-0 kubenswrapper[19715]: I0313 12:55:51.030320 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-comnvpv6eh6ml" Mar 13 12:55:51.083721 master-0 kubenswrapper[19715]: I0313 12:55:51.083332 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 13 12:55:51.128885 master-0 kubenswrapper[19715]: I0313 12:55:51.128790 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 12:55:51.160323 master-0 kubenswrapper[19715]: I0313 12:55:51.160259 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 12:55:51.195617 master-0 kubenswrapper[19715]: I0313 12:55:51.195504 19715 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 13 12:55:51.370197 master-0 kubenswrapper[19715]: I0313 12:55:51.369944 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 12:55:51.490766 master-0 kubenswrapper[19715]: I0313 12:55:51.490299 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9n5pq" Mar 13 12:55:51.538951 master-0 kubenswrapper[19715]: I0313 12:55:51.538837 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 12:55:51.562447 master-0 kubenswrapper[19715]: I0313 12:55:51.562376 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 12:55:51.737208 master-0 kubenswrapper[19715]: I0313 12:55:51.737129 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 13 12:55:51.778782 master-0 kubenswrapper[19715]: I0313 12:55:51.778613 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:55:51.780699 master-0 kubenswrapper[19715]: I0313 12:55:51.778786 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:55:51.785181 master-0 kubenswrapper[19715]: I0313 12:55:51.784688 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 13 12:55:51.816629 master-0 
kubenswrapper[19715]: I0313 12:55:51.816534 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:55:51.875799 master-0 kubenswrapper[19715]: I0313 12:55:51.875721 19715 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 13 12:55:51.882357 master-0 kubenswrapper[19715]: I0313 12:55:51.882213 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=90.882155605 podStartE2EDuration="1m30.882155605s" podCreationTimestamp="2026-03-13 12:54:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:54:43.716652599 +0000 UTC m=+310.283325376" watchObservedRunningTime="2026-03-13 12:55:51.882155605 +0000 UTC m=+378.448828392"
Mar 13 12:55:51.887201 master-0 kubenswrapper[19715]: I0313 12:55:51.887127 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 13 12:55:51.887323 master-0 kubenswrapper[19715]: I0313 12:55:51.887242 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 13 12:55:51.909687 master-0 kubenswrapper[19715]: I0313 12:55:51.908093 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 12:55:51.922520 master-0 kubenswrapper[19715]: I0313 12:55:51.922391 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=68.922352762 podStartE2EDuration="1m8.922352762s" podCreationTimestamp="2026-03-13 12:54:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:55:51.919148148 +0000 UTC m=+378.485820905" watchObservedRunningTime="2026-03-13 12:55:51.922352762 +0000 UTC m=+378.489025519"
Mar 13 12:55:51.927532 master-0 kubenswrapper[19715]: I0313 12:55:51.927475 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 12:55:52.196282 master-0 kubenswrapper[19715]: I0313 12:55:52.196204 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Mar 13 12:55:52.229638 master-0 kubenswrapper[19715]: I0313 12:55:52.229554 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 13 12:55:52.267668 master-0 kubenswrapper[19715]: I0313 12:55:52.267478 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:55:52.267668 master-0 kubenswrapper[19715]: I0313 12:55:52.267694 19715 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld"
Mar 13 12:55:52.269142 master-0 kubenswrapper[19715]: I0313 12:55:52.269092 19715 scope.go:117] "RemoveContainer" containerID="82ec61ebd3cfad1166f1099232b3eab436011df1e1a88d79d59c944d88861af1"
Mar 13 12:55:52.269689 master-0 kubenswrapper[19715]: E0313 12:55:52.269638 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"marketplace-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=marketplace-operator pod=marketplace-operator-64bf9778cb-7wnld_openshift-marketplace(6e4e773c-d970-4f5e-9172-c1ebdb41888d)\"" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" podUID="6e4e773c-d970-4f5e-9172-c1ebdb41888d"
Mar 13 12:55:52.327936 master-0 kubenswrapper[19715]: I0313 12:55:52.327851 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-zllxz"
Mar 13 12:55:52.361470 master-0 kubenswrapper[19715]: I0313 12:55:52.361396 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 13 12:55:52.434009 master-0 kubenswrapper[19715]: I0313 12:55:52.433936 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 12:55:52.435648 master-0 kubenswrapper[19715]: I0313 12:55:52.435610 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 13 12:55:52.485540 master-0 kubenswrapper[19715]: I0313 12:55:52.485336 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 13 12:55:52.533992 master-0 kubenswrapper[19715]: I0313 12:55:52.533839 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 13 12:55:52.555658 master-0 kubenswrapper[19715]: I0313 12:55:52.555526 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 13 12:55:52.745862 master-0 kubenswrapper[19715]: I0313 12:55:52.745689 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy"
Mar 13 12:55:52.891912 master-0 kubenswrapper[19715]: I0313 12:55:52.891802 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 12:55:52.895330 master-0 kubenswrapper[19715]: I0313 12:55:52.895286 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xg9t5"
Mar 13 12:55:52.923660 master-0 kubenswrapper[19715]: I0313 12:55:52.923548 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-89sxl"
Mar 13 12:55:52.966791 master-0 kubenswrapper[19715]: I0313 12:55:52.966693 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 13 12:55:53.129710 master-0 kubenswrapper[19715]: I0313 12:55:53.129561 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 13 12:55:53.231833 master-0 kubenswrapper[19715]: I0313 12:55:53.231757 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 12:55:53.439035 master-0 kubenswrapper[19715]: I0313 12:55:53.438942 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 12:55:53.617843 master-0 kubenswrapper[19715]: I0313 12:55:53.617608 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 12:55:53.667303 master-0 kubenswrapper[19715]: I0313 12:55:53.667199 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 12:55:53.685817 master-0 kubenswrapper[19715]: I0313 12:55:53.683941 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 13 12:55:53.799053 master-0 kubenswrapper[19715]: I0313 12:55:53.798843 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 13 12:55:53.814645 master-0 kubenswrapper[19715]: I0313 12:55:53.814218 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 13 12:55:53.844603 master-0 kubenswrapper[19715]: I0313 12:55:53.844482 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 12:55:53.955473 master-0 kubenswrapper[19715]: I0313 12:55:53.954399 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 13 12:55:54.028924 master-0 kubenswrapper[19715]: I0313 12:55:54.028821 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 13 12:55:54.074609 master-0 kubenswrapper[19715]: I0313 12:55:54.074372 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 12:55:54.163055 master-0 kubenswrapper[19715]: I0313 12:55:54.162939 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 13 12:55:54.436559 master-0 kubenswrapper[19715]: I0313 12:55:54.436455 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 13 12:55:54.492509 master-0 kubenswrapper[19715]: I0313 12:55:54.492413 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-8qlr6"
Mar 13 12:55:54.953496 master-0 kubenswrapper[19715]: I0313 12:55:54.953386 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 13 12:55:55.022836 master-0 kubenswrapper[19715]: I0313 12:55:55.022740 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 12:55:55.032375 master-0 kubenswrapper[19715]: I0313 12:55:55.032282 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 13 12:55:55.156473 master-0 kubenswrapper[19715]: I0313 12:55:55.156372 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 13 12:55:55.212026 master-0 kubenswrapper[19715]: I0313 12:55:55.211852 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 13 12:55:55.227732 master-0 kubenswrapper[19715]: I0313 12:55:55.227658 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 13 12:55:55.273385 master-0 kubenswrapper[19715]: I0313 12:55:55.273323 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 13 12:55:55.273824 master-0 kubenswrapper[19715]: I0313 12:55:55.273789 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 13 12:55:55.512256 master-0 kubenswrapper[19715]: I0313 12:55:55.512044 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 13 12:55:55.535311 master-0 kubenswrapper[19715]: I0313 12:55:55.535248 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Mar 13 12:55:55.552897 master-0 kubenswrapper[19715]: I0313 12:55:55.552843 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 12:55:55.754975 master-0 kubenswrapper[19715]: I0313 12:55:55.754865 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 13 12:55:55.779400 master-0 kubenswrapper[19715]: I0313 12:55:55.779208 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 13 12:55:55.880204 master-0 kubenswrapper[19715]: I0313 12:55:55.880091 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-5lcmq"
Mar 13 12:55:56.045811 master-0 kubenswrapper[19715]: I0313 12:55:56.045467 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 13 12:55:56.280794 master-0 kubenswrapper[19715]: I0313 12:55:56.280692 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 13 12:55:56.335004 master-0 kubenswrapper[19715]: I0313 12:55:56.334796 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 12:55:56.400875 master-0 kubenswrapper[19715]: I0313 12:55:56.400716 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 13 12:55:56.430181 master-0 kubenswrapper[19715]: I0313 12:55:56.430096 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 13 12:55:56.567005 master-0 kubenswrapper[19715]: I0313 12:55:56.566911 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 13 12:55:56.699057 master-0 kubenswrapper[19715]: I0313 12:55:56.698950 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 13 12:55:56.891838 master-0 kubenswrapper[19715]: I0313 12:55:56.891673 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 12:55:57.012064 master-0 kubenswrapper[19715]: I0313 12:55:57.011867 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 13 12:55:57.063802 master-0 kubenswrapper[19715]: I0313 12:55:57.063738 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 13 12:55:57.136479 master-0 kubenswrapper[19715]: I0313 12:55:57.136350 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 13 12:55:57.198790 master-0 kubenswrapper[19715]: I0313 12:55:57.198694 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 13 12:55:57.213958 master-0 kubenswrapper[19715]: I0313 12:55:57.213918 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 13 12:55:57.568853 master-0 kubenswrapper[19715]: I0313 12:55:57.568765 19715 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 13 12:55:57.658762 master-0 kubenswrapper[19715]: I0313 12:55:57.658698 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 13 12:55:57.806057 master-0 kubenswrapper[19715]: I0313 12:55:57.805992 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 13 12:55:57.939555 master-0 kubenswrapper[19715]: I0313 12:55:57.939456 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-kcbnp"
Mar 13 12:55:58.297367 master-0 kubenswrapper[19715]: I0313 12:55:58.297181 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-gbnht"
Mar 13 12:55:58.380074 master-0 kubenswrapper[19715]: I0313 12:55:58.380005 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 13 12:55:58.638448 master-0 kubenswrapper[19715]: I0313 12:55:58.638364 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 13 12:55:58.848427 master-0 kubenswrapper[19715]: I0313 12:55:58.848377 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 13 12:55:58.906172 master-0 kubenswrapper[19715]: I0313 12:55:58.906030 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 12:55:59.281342 master-0 kubenswrapper[19715]: I0313 12:55:59.281171 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 12:55:59.290382 master-0 kubenswrapper[19715]: I0313 12:55:59.290288 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 13 12:55:59.492811 master-0 kubenswrapper[19715]: I0313 12:55:59.492650 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 13 12:55:59.548013 master-0 kubenswrapper[19715]: I0313 12:55:59.547842 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 13 12:55:59.575168 master-0 kubenswrapper[19715]: I0313 12:55:59.575063 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 12:55:59.697697 master-0 kubenswrapper[19715]: I0313 12:55:59.697619 19715 scope.go:117] "RemoveContainer" containerID="07e13748c833ce996ac1538194bac668886ff0e2f8f58bdd076d22864d7e0170"
Mar 13 12:55:59.698343 master-0 kubenswrapper[19715]: E0313 12:55:59.698134 19715 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"insights-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=insights-operator pod=insights-operator-8f89dfddd-s4gd8_openshift-insights(0ecab24a-cb8c-4171-9a04-c34d1d6d71c1)\"" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" podUID="0ecab24a-cb8c-4171-9a04-c34d1d6d71c1"
Mar 13 12:55:59.760608 master-0 kubenswrapper[19715]: I0313 12:55:59.758096 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 13 12:55:59.841186 master-0 kubenswrapper[19715]: I0313 12:55:59.840986 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 13 12:55:59.980463 master-0 kubenswrapper[19715]: I0313 12:55:59.980371 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4f2vw"
Mar 13 12:56:00.206150 master-0 kubenswrapper[19715]: I0313 12:56:00.206062 19715 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 13 12:56:00.530324 master-0 kubenswrapper[19715]: I0313 12:56:00.530158 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:56:00.530324 master-0 kubenswrapper[19715]: I0313 12:56:00.530233 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:56:00.581497 master-0 kubenswrapper[19715]: I0313 12:56:00.581398 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 13 12:56:00.800299 master-0 kubenswrapper[19715]: I0313 12:56:00.800072 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 13 12:56:00.942701 master-0 kubenswrapper[19715]: I0313 12:56:00.942604 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 12:56:01.098775 master-0 kubenswrapper[19715]: I0313 12:56:01.098272 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 13 12:56:01.377601 master-0 kubenswrapper[19715]: I0313 12:56:01.377445 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 13 12:56:01.452440 master-0 kubenswrapper[19715]: I0313 12:56:01.452389 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 12:56:01.500126 master-0 kubenswrapper[19715]: I0313 12:56:01.500058 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 13 12:56:01.502094 master-0 kubenswrapper[19715]: I0313 12:56:01.502065 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 13 12:56:01.523353 master-0 kubenswrapper[19715]: I0313 12:56:01.523288 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 13 12:56:01.524352 master-0 kubenswrapper[19715]: I0313 12:56:01.524275 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 13 12:56:01.974699 master-0 kubenswrapper[19715]: I0313 12:56:01.972265 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 13 12:56:01.974699 master-0 kubenswrapper[19715]: I0313 12:56:01.972493 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 12:56:01.974699 master-0 kubenswrapper[19715]: I0313 12:56:01.972545 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 12:56:01.978691 master-0 kubenswrapper[19715]: I0313 12:56:01.978146 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 13 12:56:02.020630 master-0 kubenswrapper[19715]: I0313 12:56:02.014315 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 13 12:56:02.031227 master-0 kubenswrapper[19715]: I0313 12:56:02.031042 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 12:56:02.486431 master-0 kubenswrapper[19715]: I0313 12:56:02.486347 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle"
Mar 13 12:56:02.868273 master-0 kubenswrapper[19715]: I0313 12:56:02.868210 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config"
Mar 13 12:56:02.928921 master-0 kubenswrapper[19715]: I0313 12:56:02.928809 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 13 12:56:02.933528 master-0 kubenswrapper[19715]: I0313 12:56:02.933472 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 12:56:02.944111 master-0 kubenswrapper[19715]: I0313 12:56:02.943994 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 13 12:56:02.994700 master-0 kubenswrapper[19715]: I0313 12:56:02.994428 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:56:02.996108 master-0 kubenswrapper[19715]: I0313 12:56:02.995921 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" containerID="cri-o://ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2" gracePeriod=5
Mar 13 12:56:03.059449 master-0 kubenswrapper[19715]: I0313 12:56:03.059372 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 13 12:56:03.458736 master-0 kubenswrapper[19715]: I0313 12:56:03.458608 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-lcdwj"
Mar 13 12:56:04.119419 master-0 kubenswrapper[19715]: I0313 12:56:04.119323 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Mar 13 12:56:04.216010 master-0 kubenswrapper[19715]: I0313 12:56:04.215923 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 13 12:56:04.435861 master-0 kubenswrapper[19715]: I0313 12:56:04.435759 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-qggps"
Mar 13 12:56:04.669903 master-0 kubenswrapper[19715]: I0313 12:56:04.669806 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"]
Mar 13 12:56:04.670415 master-0 kubenswrapper[19715]: E0313 12:56:04.670231 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="139213ac-1249-40eb-853f-768a8c20f6cd" containerName="installer"
Mar 13 12:56:04.670415 master-0 kubenswrapper[19715]: I0313 12:56:04.670262 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="139213ac-1249-40eb-853f-768a8c20f6cd" containerName="installer"
Mar 13 12:56:04.670415 master-0 kubenswrapper[19715]: E0313 12:56:04.670285 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor"
Mar 13 12:56:04.670415 master-0 kubenswrapper[19715]: I0313 12:56:04.670291 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor"
Mar 13 12:56:04.670737 master-0 kubenswrapper[19715]: I0313 12:56:04.670470 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="139213ac-1249-40eb-853f-768a8c20f6cd" containerName="installer"
Mar 13 12:56:04.670737 master-0 kubenswrapper[19715]: I0313 12:56:04.670516 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor"
Mar 13 12:56:04.671171 master-0 kubenswrapper[19715]: I0313 12:56:04.671127 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.675178 master-0 kubenswrapper[19715]: I0313 12:56:04.675079 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 13 12:56:04.675702 master-0 kubenswrapper[19715]: I0313 12:56:04.675089 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 13 12:56:04.675702 master-0 kubenswrapper[19715]: I0313 12:56:04.675460 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 13 12:56:04.676806 master-0 kubenswrapper[19715]: I0313 12:56:04.676757 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 13 12:56:04.677431 master-0 kubenswrapper[19715]: I0313 12:56:04.677391 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 13 12:56:04.679051 master-0 kubenswrapper[19715]: I0313 12:56:04.679007 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 13 12:56:04.679204 master-0 kubenswrapper[19715]: I0313 12:56:04.679172 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 13 12:56:04.679546 master-0 kubenswrapper[19715]: I0313 12:56:04.679499 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Mar 13 12:56:04.679657 master-0 kubenswrapper[19715]: I0313 12:56:04.679509 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 13 12:56:04.684996 master-0 kubenswrapper[19715]: I0313 12:56:04.684934 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 13 12:56:04.685843 master-0 kubenswrapper[19715]: I0313 12:56:04.685711 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Mar 13 12:56:04.686561 master-0 kubenswrapper[19715]: I0313 12:56:04.686284 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 13 12:56:04.694349 master-0 kubenswrapper[19715]: I0313 12:56:04.692742 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Mar 13 12:56:04.704344 master-0 kubenswrapper[19715]: I0313 12:56:04.704259 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"]
Mar 13 12:56:04.729819 master-0 kubenswrapper[19715]: I0313 12:56:04.729725 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.729819 master-0 kubenswrapper[19715]: I0313 12:56:04.729816 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-policies\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730275 master-0 kubenswrapper[19715]: I0313 12:56:04.729849 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730275 master-0 kubenswrapper[19715]: I0313 12:56:04.729896 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62f56\" (UniqueName: \"kubernetes.io/projected/16eeeff8-7c53-4c3e-876a-cff0902955fd-kube-api-access-62f56\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730275 master-0 kubenswrapper[19715]: I0313 12:56:04.729927 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730275 master-0 kubenswrapper[19715]: I0313 12:56:04.729988 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730275 master-0 kubenswrapper[19715]: I0313 12:56:04.730012 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-error\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730275 master-0 kubenswrapper[19715]: I0313 12:56:04.730064 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730275 master-0 kubenswrapper[19715]: I0313 12:56:04.730106 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730275 master-0 kubenswrapper[19715]: I0313 12:56:04.730177 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730275 master-0 kubenswrapper[19715]: I0313 12:56:04.730215 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-login\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730275 master-0 kubenswrapper[19715]: I0313 12:56:04.730272 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-dir\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.730800 master-0 kubenswrapper[19715]: I0313 12:56:04.730336 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.831711 master-0 kubenswrapper[19715]: I0313 12:56:04.831624 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.831711 master-0 kubenswrapper[19715]: I0313 12:56:04.831718 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-error\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.831770 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.831799 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.831827 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.831863 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-login\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.831895 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-dir\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.831930 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.831970 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.831997 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-policies\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: E0313 12:56:04.831993 19715 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 
12:56:04.832055 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-dir\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.832015 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: E0313 12:56:04.832169 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:05.332097617 +0000 UTC m=+391.898770374 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : configmap "v4-0-config-system-cliconfig" not found Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.832203 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-62f56\" (UniqueName: \"kubernetes.io/projected/16eeeff8-7c53-4c3e-876a-cff0902955fd-kube-api-access-62f56\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: I0313 12:56:04.832253 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: E0313 12:56:04.832335 19715 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 13 12:56:04.832724 master-0 kubenswrapper[19715]: E0313 12:56:04.832547 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:05.33248636 +0000 UTC m=+391.899159117 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : secret "v4-0-config-system-session" not found Mar 13 12:56:04.836470 master-0 kubenswrapper[19715]: I0313 12:56:04.834302 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-service-ca\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.836470 master-0 kubenswrapper[19715]: I0313 12:56:04.834429 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-policies\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.836470 master-0 kubenswrapper[19715]: I0313 12:56:04.835659 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.838776 master-0 kubenswrapper[19715]: I0313 12:56:04.838694 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5fc8d565d6-jmhct\" 
(UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.838942 master-0 kubenswrapper[19715]: I0313 12:56:04.838890 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-router-certs\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.839122 master-0 kubenswrapper[19715]: I0313 12:56:04.839038 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-login\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.841398 master-0 kubenswrapper[19715]: I0313 12:56:04.840684 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-error\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.841398 master-0 kubenswrapper[19715]: I0313 12:56:04.841346 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.849524 master-0 kubenswrapper[19715]: I0313 
12:56:04.849192 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.867760 master-0 kubenswrapper[19715]: I0313 12:56:04.867688 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-62f56\" (UniqueName: \"kubernetes.io/projected/16eeeff8-7c53-4c3e-876a-cff0902955fd-kube-api-access-62f56\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:04.998808 master-0 kubenswrapper[19715]: I0313 12:56:04.998467 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-7t467" Mar 13 12:56:05.333901 master-0 kubenswrapper[19715]: I0313 12:56:05.333640 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 12:56:05.342519 master-0 kubenswrapper[19715]: I0313 12:56:05.342441 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:05.342691 master-0 kubenswrapper[19715]: I0313 12:56:05.342641 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:05.342691 master-0 kubenswrapper[19715]: E0313 12:56:05.342681 19715 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 13 12:56:05.342837 master-0 kubenswrapper[19715]: E0313 12:56:05.342779 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:06.342753278 +0000 UTC m=+392.909426035 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : secret "v4-0-config-system-session" not found Mar 13 12:56:05.342910 master-0 kubenswrapper[19715]: E0313 12:56:05.342851 19715 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 13 12:56:05.342983 master-0 kubenswrapper[19715]: E0313 12:56:05.342941 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:06.342910084 +0000 UTC m=+392.909582841 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : configmap "v4-0-config-system-cliconfig" not found Mar 13 12:56:05.512645 master-0 kubenswrapper[19715]: I0313 12:56:05.512524 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:56:05.551277 master-0 kubenswrapper[19715]: I0313 12:56:05.551190 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 12:56:05.933253 master-0 kubenswrapper[19715]: I0313 12:56:05.932323 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 12:56:05.936363 master-0 kubenswrapper[19715]: I0313 12:56:05.936183 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 12:56:06.037128 master-0 kubenswrapper[19715]: I0313 12:56:06.037037 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 12:56:06.348406 master-0 kubenswrapper[19715]: I0313 12:56:06.348158 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:06.348406 master-0 kubenswrapper[19715]: I0313 12:56:06.348382 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:06.349845 master-0 kubenswrapper[19715]: E0313 12:56:06.348478 19715 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 13 12:56:06.349845 master-0 kubenswrapper[19715]: E0313 12:56:06.348615 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:08.348563504 +0000 UTC m=+394.915236271 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : secret "v4-0-config-system-session" not found Mar 13 12:56:06.349845 master-0 kubenswrapper[19715]: E0313 12:56:06.348693 19715 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 13 12:56:06.349845 master-0 kubenswrapper[19715]: E0313 12:56:06.348919 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:08.348881665 +0000 UTC m=+394.915554612 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : configmap "v4-0-config-system-cliconfig" not found Mar 13 12:56:06.697989 master-0 kubenswrapper[19715]: I0313 12:56:06.697858 19715 scope.go:117] "RemoveContainer" containerID="82ec61ebd3cfad1166f1099232b3eab436011df1e1a88d79d59c944d88861af1" Mar 13 12:56:07.021083 master-0 kubenswrapper[19715]: I0313 12:56:07.020890 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-xv4qd" Mar 13 12:56:07.035727 master-0 kubenswrapper[19715]: I0313 12:56:07.035619 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" event={"ID":"6e4e773c-d970-4f5e-9172-c1ebdb41888d","Type":"ContainerStarted","Data":"961998d4d755bce7b27c503f69f235ce9f1bc019ac85cebb1fe5660b4ce647a5"} Mar 13 12:56:07.036804 master-0 kubenswrapper[19715]: I0313 12:56:07.036739 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:56:07.040317 master-0 kubenswrapper[19715]: I0313 12:56:07.039931 19715 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-7wnld container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" start-of-body= Mar 13 12:56:07.040317 master-0 kubenswrapper[19715]: I0313 12:56:07.040017 19715 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" podUID="6e4e773c-d970-4f5e-9172-c1ebdb41888d" containerName="marketplace-operator" probeResult="failure" output="Get 
\"http://10.128.0.14:8080/healthz\": dial tcp 10.128.0.14:8080: connect: connection refused" Mar 13 12:56:07.096207 master-0 kubenswrapper[19715]: I0313 12:56:07.096033 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 13 12:56:07.595109 master-0 kubenswrapper[19715]: I0313 12:56:07.595014 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt" Mar 13 12:56:07.613970 master-0 kubenswrapper[19715]: I0313 12:56:07.613871 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Mar 13 12:56:07.626042 master-0 kubenswrapper[19715]: I0313 12:56:07.625969 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 12:56:08.049328 master-0 kubenswrapper[19715]: I0313 12:56:08.049203 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-7wnld" Mar 13 12:56:08.120406 master-0 kubenswrapper[19715]: I0313 12:56:08.120094 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 12:56:08.421482 master-0 kubenswrapper[19715]: I0313 12:56:08.421367 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:08.421482 master-0 kubenswrapper[19715]: I0313 12:56:08.421477 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:08.422200 master-0 kubenswrapper[19715]: E0313 12:56:08.421551 19715 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found Mar 13 12:56:08.422200 master-0 kubenswrapper[19715]: E0313 12:56:08.421710 19715 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found Mar 13 12:56:08.422200 master-0 kubenswrapper[19715]: E0313 12:56:08.421733 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:12.421696589 +0000 UTC m=+398.988369346 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : configmap "v4-0-config-system-cliconfig" not found Mar 13 12:56:08.422200 master-0 kubenswrapper[19715]: E0313 12:56:08.421781 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:12.421762752 +0000 UTC m=+398.988435509 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : secret "v4-0-config-system-session" not found Mar 13 12:56:08.578774 master-0 kubenswrapper[19715]: I0313 12:56:08.578695 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 13 12:56:08.579223 master-0 kubenswrapper[19715]: I0313 12:56:08.578832 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:56:08.608010 master-0 kubenswrapper[19715]: I0313 12:56:08.607919 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 13 12:56:08.728663 master-0 kubenswrapper[19715]: I0313 12:56:08.728385 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 12:56:08.728663 master-0 kubenswrapper[19715]: I0313 12:56:08.728559 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 12:56:08.729209 master-0 kubenswrapper[19715]: I0313 12:56:08.728725 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log" (OuterVolumeSpecName: "var-log") pod 
"899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:56:08.729209 master-0 kubenswrapper[19715]: I0313 12:56:08.728795 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock" (OuterVolumeSpecName: "var-lock") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:56:08.729209 master-0 kubenswrapper[19715]: I0313 12:56:08.728947 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 12:56:08.729209 master-0 kubenswrapper[19715]: I0313 12:56:08.729066 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 12:56:08.729209 master-0 kubenswrapper[19715]: I0313 12:56:08.729071 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:56:08.729209 master-0 kubenswrapper[19715]: I0313 12:56:08.729085 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 12:56:08.729209 master-0 kubenswrapper[19715]: I0313 12:56:08.729117 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests" (OuterVolumeSpecName: "manifests") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:56:08.729687 master-0 kubenswrapper[19715]: I0313 12:56:08.729660 19715 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:08.729687 master-0 kubenswrapper[19715]: I0313 12:56:08.729681 19715 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:08.729820 master-0 kubenswrapper[19715]: I0313 12:56:08.729695 19715 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:08.729820 master-0 kubenswrapper[19715]: I0313 12:56:08.729707 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:08.740669 master-0 kubenswrapper[19715]: I0313 
12:56:08.740524 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:56:08.832830 master-0 kubenswrapper[19715]: I0313 12:56:08.832702 19715 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:09.056806 master-0 kubenswrapper[19715]: I0313 12:56:09.056550 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log"
Mar 13 12:56:09.056806 master-0 kubenswrapper[19715]: I0313 12:56:09.056688 19715 generic.go:334] "Generic (PLEG): container finished" podID="899242a15b2bdf3b4a04fb323647ca94" containerID="ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2" exitCode=137
Mar 13 12:56:09.057278 master-0 kubenswrapper[19715]: I0313 12:56:09.056882 19715 scope.go:117] "RemoveContainer" containerID="ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2"
Mar 13 12:56:09.057334 master-0 kubenswrapper[19715]: I0313 12:56:09.057251 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:56:09.090725 master-0 kubenswrapper[19715]: I0313 12:56:09.090643 19715 scope.go:117] "RemoveContainer" containerID="ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2"
Mar 13 12:56:09.091482 master-0 kubenswrapper[19715]: E0313 12:56:09.091402 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2\": container with ID starting with ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2 not found: ID does not exist" containerID="ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2"
Mar 13 12:56:09.091614 master-0 kubenswrapper[19715]: I0313 12:56:09.091508 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2"} err="failed to get container status \"ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2\": rpc error: code = NotFound desc = could not find container \"ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2\": container with ID starting with ae8ef2b10ac29b7fb241d5fffe767faf0c01c7e0ac4b4eeb469d6852c2beccd2 not found: ID does not exist"
Mar 13 12:56:09.236212 master-0 kubenswrapper[19715]: I0313 12:56:09.236124 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-5s2w7"
Mar 13 12:56:09.646463 master-0 kubenswrapper[19715]: I0313 12:56:09.646376 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 13 12:56:09.707320 master-0 kubenswrapper[19715]: I0313 12:56:09.707213 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899242a15b2bdf3b4a04fb323647ca94" path="/var/lib/kubelet/pods/899242a15b2bdf3b4a04fb323647ca94/volumes"
Mar 13 12:56:09.707740 master-0 kubenswrapper[19715]: I0313 12:56:09.707714 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID=""
Mar 13 12:56:09.725466 master-0 kubenswrapper[19715]: I0313 12:56:09.725400 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:56:09.725790 master-0 kubenswrapper[19715]: I0313 12:56:09.725760 19715 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="0c70cfe4-3ccc-4480-87da-c46a0ca720f1"
Mar 13 12:56:09.730438 master-0 kubenswrapper[19715]: I0313 12:56:09.730366 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:56:09.730438 master-0 kubenswrapper[19715]: I0313 12:56:09.730430 19715 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="0c70cfe4-3ccc-4480-87da-c46a0ca720f1"
Mar 13 12:56:10.530616 master-0 kubenswrapper[19715]: I0313 12:56:10.530516 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:56:10.531164 master-0 kubenswrapper[19715]: I0313 12:56:10.530630 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:56:10.963120 master-0 kubenswrapper[19715]: I0313 12:56:10.963006 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-8qwx8"
Mar 13 12:56:11.413861 master-0 kubenswrapper[19715]: I0313 12:56:11.192841 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 13 12:56:11.427164 master-0 kubenswrapper[19715]: I0313 12:56:11.427086 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 13 12:56:11.775295 master-0 kubenswrapper[19715]: I0313 12:56:11.775081 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 12:56:11.775523 master-0 kubenswrapper[19715]: I0313 12:56:11.775291 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 12:56:11.892306 master-0 kubenswrapper[19715]: I0313 12:56:11.892224 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 12:56:12.089347 master-0 kubenswrapper[19715]: I0313 12:56:12.089191 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 12:56:12.503229 master-0 kubenswrapper[19715]: I0313 12:56:12.502834 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:12.503229 master-0 kubenswrapper[19715]: I0313 12:56:12.502989 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:12.503229 master-0 kubenswrapper[19715]: E0313 12:56:12.503116 19715 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 13 12:56:12.503229 master-0 kubenswrapper[19715]: E0313 12:56:12.503134 19715 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found
Mar 13 12:56:12.503229 master-0 kubenswrapper[19715]: E0313 12:56:12.503269 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:20.503244663 +0000 UTC m=+407.069917420 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : configmap "v4-0-config-system-cliconfig" not found
Mar 13 12:56:12.504043 master-0 kubenswrapper[19715]: E0313 12:56:12.503286 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:20.503279644 +0000 UTC m=+407.069952401 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : secret "v4-0-config-system-session" not found
Mar 13 12:56:13.700513 master-0 kubenswrapper[19715]: I0313 12:56:13.700436 19715 scope.go:117] "RemoveContainer" containerID="07e13748c833ce996ac1538194bac668886ff0e2f8f58bdd076d22864d7e0170"
Mar 13 12:56:13.799782 master-0 kubenswrapper[19715]: I0313 12:56:13.799709 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 13 12:56:14.101469 master-0 kubenswrapper[19715]: I0313 12:56:14.101352 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-s4gd8" event={"ID":"0ecab24a-cb8c-4171-9a04-c34d1d6d71c1","Type":"ContainerStarted","Data":"33da15de7259b241e3ad421daed7a55e5491b58276cf16d7f0e21670356f2f16"}
Mar 13 12:56:14.223764 master-0 kubenswrapper[19715]: I0313 12:56:14.223705 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 13 12:56:20.531190 master-0 kubenswrapper[19715]: I0313 12:56:20.531010 19715 patch_prober.go:28] interesting pod/console-b649d7df7-lm9xz container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused" start-of-body=
Mar 13 12:56:20.531190 master-0 kubenswrapper[19715]: I0313 12:56:20.531106 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" probeResult="failure" output="Get \"https://10.128.0.92:8443/health\": dial tcp 10.128.0.92:8443: connect: connection refused"
Mar 13 12:56:20.535790 master-0 kubenswrapper[19715]: I0313 12:56:20.535737 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:20.536025 master-0 kubenswrapper[19715]: E0313 12:56:20.535954 19715 configmap.go:193] Couldn't get configMap openshift-authentication/v4-0-config-system-cliconfig: configmap "v4-0-config-system-cliconfig" not found
Mar 13 12:56:20.536131 master-0 kubenswrapper[19715]: I0313 12:56:20.535993 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"
Mar 13 12:56:20.536198 master-0 kubenswrapper[19715]: E0313 12:56:20.536096 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:36.536060827 +0000 UTC m=+423.102733584 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-cliconfig" (UniqueName: "kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : configmap "v4-0-config-system-cliconfig" not found
Mar 13 12:56:20.536262 master-0 kubenswrapper[19715]: E0313 12:56:20.536104 19715 secret.go:189] Couldn't get secret openshift-authentication/v4-0-config-system-session: secret "v4-0-config-system-session" not found
Mar 13 12:56:20.536314 master-0 kubenswrapper[19715]: E0313 12:56:20.536279 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session podName:16eeeff8-7c53-4c3e-876a-cff0902955fd nodeName:}" failed. No retries permitted until 2026-03-13 12:56:36.536243433 +0000 UTC m=+423.102916190 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "v4-0-config-system-session" (UniqueName: "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session") pod "oauth-openshift-5fc8d565d6-jmhct" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd") : secret "v4-0-config-system-session" not found
Mar 13 12:56:21.774913 master-0 kubenswrapper[19715]: I0313 12:56:21.774586 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body=
Mar 13 12:56:21.774913 master-0 kubenswrapper[19715]: I0313 12:56:21.774652 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused"
Mar 13 12:56:24.327326 master-0 kubenswrapper[19715]: I0313 12:56:24.327201 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"]
Mar 13 12:56:24.328004 master-0 kubenswrapper[19715]: I0313 12:56:24.327925 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="alertmanager" containerID="cri-o://9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110" gracePeriod=120
Mar 13 12:56:24.328074 master-0 kubenswrapper[19715]: I0313 12:56:24.328042 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy" containerID="cri-o://5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5" gracePeriod=120
Mar 13 12:56:24.328278 master-0 kubenswrapper[19715]: I0313 12:56:24.328079 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy-metric" containerID="cri-o://de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52" gracePeriod=120
Mar 13 12:56:24.328435 master-0 kubenswrapper[19715]: I0313 12:56:24.328251 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="config-reloader" containerID="cri-o://b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614" gracePeriod=120
Mar 13 12:56:24.328507 master-0 kubenswrapper[19715]: I0313 12:56:24.328270 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="prom-label-proxy" containerID="cri-o://2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad" gracePeriod=120
Mar 13 12:56:24.329315 master-0 kubenswrapper[19715]: I0313 12:56:24.329069 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy-web" containerID="cri-o://192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41" gracePeriod=120
Mar 13 12:56:25.230936 master-0 kubenswrapper[19715]: I0313 12:56:25.230815 19715 generic.go:334] "Generic (PLEG): container finished" podID="fb537079-a878-4105-9055-7bc9d93a0333" containerID="2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad" exitCode=0
Mar 13 12:56:25.230936 master-0 kubenswrapper[19715]: I0313 12:56:25.230904 19715 generic.go:334] "Generic (PLEG): container finished" podID="fb537079-a878-4105-9055-7bc9d93a0333" containerID="5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5" exitCode=0
Mar 13 12:56:25.230936 master-0 kubenswrapper[19715]: I0313 12:56:25.230892 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerDied","Data":"2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad"}
Mar 13 12:56:25.231364 master-0 kubenswrapper[19715]: I0313 12:56:25.230957 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerDied","Data":"5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5"}
Mar 13 12:56:25.231364 master-0 kubenswrapper[19715]: I0313 12:56:25.230971 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerDied","Data":"b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614"}
Mar 13 12:56:25.231364 master-0 kubenswrapper[19715]: I0313 12:56:25.230917 19715 generic.go:334] "Generic (PLEG): container finished" podID="fb537079-a878-4105-9055-7bc9d93a0333" containerID="b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614" exitCode=0
Mar 13 12:56:25.231364 master-0 kubenswrapper[19715]: I0313 12:56:25.230996 19715 generic.go:334] "Generic (PLEG): container finished" podID="fb537079-a878-4105-9055-7bc9d93a0333" containerID="9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110" exitCode=0
Mar 13 12:56:25.231364 master-0 kubenswrapper[19715]: I0313 12:56:25.231020 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerDied","Data":"9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110"}
Mar 13 12:56:25.711124 master-0 kubenswrapper[19715]: I0313 12:56:25.711066 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 13 12:56:25.738028 master-0 kubenswrapper[19715]: I0313 12:56:25.737956 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-main-db\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738028 master-0 kubenswrapper[19715]: I0313 12:56:25.738027 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-web-config\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738396 master-0 kubenswrapper[19715]: I0313 12:56:25.738113 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9c82r\" (UniqueName: \"kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-kube-api-access-9c82r\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738396 master-0 kubenswrapper[19715]: I0313 12:56:25.738197 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-tls-assets\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738396 master-0 kubenswrapper[19715]: I0313 12:56:25.738229 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-config-out\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738396 master-0 kubenswrapper[19715]: I0313 12:56:25.738271 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-main-tls\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738396 master-0 kubenswrapper[19715]: I0313 12:56:25.738314 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738396 master-0 kubenswrapper[19715]: I0313 12:56:25.738365 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-trusted-ca-bundle\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738768 master-0 kubenswrapper[19715]: I0313 12:56:25.738414 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-metric\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738768 master-0 kubenswrapper[19715]: I0313 12:56:25.738443 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-config-volume\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738768 master-0 kubenswrapper[19715]: I0313 12:56:25.738481 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-metrics-client-ca\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738768 master-0 kubenswrapper[19715]: I0313 12:56:25.738514 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-web\") pod \"fb537079-a878-4105-9055-7bc9d93a0333\" (UID: \"fb537079-a878-4105-9055-7bc9d93a0333\") "
Mar 13 12:56:25.738768 master-0 kubenswrapper[19715]: I0313 12:56:25.738511 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "alertmanager-main-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:56:25.739223 master-0 kubenswrapper[19715]: I0313 12:56:25.739083 19715 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-main-db\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.741315 master-0 kubenswrapper[19715]: I0313 12:56:25.741207 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:56:25.743795 master-0 kubenswrapper[19715]: I0313 12:56:25.743180 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:56:25.743795 master-0 kubenswrapper[19715]: I0313 12:56:25.743753 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:56:25.748564 master-0 kubenswrapper[19715]: I0313 12:56:25.745858 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-kube-api-access-9c82r" (OuterVolumeSpecName: "kube-api-access-9c82r") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "kube-api-access-9c82r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:56:25.748564 master-0 kubenswrapper[19715]: I0313 12:56:25.746274 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "secret-alertmanager-main-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:56:25.749535 master-0 kubenswrapper[19715]: I0313 12:56:25.749194 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-config-volume" (OuterVolumeSpecName: "config-volume") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:56:25.751656 master-0 kubenswrapper[19715]: I0313 12:56:25.751595 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:56:25.754644 master-0 kubenswrapper[19715]: I0313 12:56:25.753764 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:56:25.754644 master-0 kubenswrapper[19715]: I0313 12:56:25.753950 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-config-out" (OuterVolumeSpecName: "config-out") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 12:56:25.754644 master-0 kubenswrapper[19715]: I0313 12:56:25.753963 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:56:25.799352 master-0 kubenswrapper[19715]: I0313 12:56:25.799193 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-web-config" (OuterVolumeSpecName: "web-config") pod "fb537079-a878-4105-9055-7bc9d93a0333" (UID: "fb537079-a878-4105-9055-7bc9d93a0333"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:56:25.840405 master-0 kubenswrapper[19715]: I0313 12:56:25.840341 19715 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-web-config\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.841052 master-0 kubenswrapper[19715]: I0313 12:56:25.840384 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9c82r\" (UniqueName: \"kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-kube-api-access-9c82r\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.841052 master-0 kubenswrapper[19715]: I0313 12:56:25.841048 19715 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/fb537079-a878-4105-9055-7bc9d93a0333-tls-assets\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.841203 master-0 kubenswrapper[19715]: I0313 12:56:25.841060 19715 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/fb537079-a878-4105-9055-7bc9d93a0333-config-out\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.841203 master-0 kubenswrapper[19715]: I0313 12:56:25.841072 19715 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.841203 master-0 kubenswrapper[19715]: I0313 12:56:25.841082 19715 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.841203 master-0 kubenswrapper[19715]: I0313 12:56:25.841093 19715 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.841203 master-0 kubenswrapper[19715]: I0313 12:56:25.841103 19715 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.841203 master-0 kubenswrapper[19715]: I0313 12:56:25.841113 19715 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-config-volume\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.841203 master-0 kubenswrapper[19715]: I0313 12:56:25.841124 19715 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/fb537079-a878-4105-9055-7bc9d93a0333-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.841203 master-0 kubenswrapper[19715]: I0313 12:56:25.841134 19715 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/fb537079-a878-4105-9055-7bc9d93a0333-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\""
Mar 13 12:56:25.970002 master-0 kubenswrapper[19715]: I0313 12:56:25.969935 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-b649d7df7-lm9xz"]
Mar 13 12:56:26.010184 master-0 kubenswrapper[19715]: I0313 12:56:26.010100 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-79d876f4d6-kqmws"]
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: E0313 12:56:26.010493 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="prom-label-proxy"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.010747 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="prom-label-proxy"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: E0313 12:56:26.010787 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.010797 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: E0313 12:56:26.010814 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="init-config-reloader"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.010824 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="init-config-reloader"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: E0313 12:56:26.010839 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy-web"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.010849 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy-web"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: E0313 12:56:26.010860 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="alertmanager"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.010867 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="alertmanager"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: E0313 12:56:26.010876 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy-metric"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.010885 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy-metric"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: E0313 12:56:26.010901 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="config-reloader"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.010909 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="config-reloader"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.011137 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.011197 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy-metric"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.011223 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="prom-label-proxy"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.011237 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="kube-rbac-proxy-web"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.011258 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="config-reloader"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.011272 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="fb537079-a878-4105-9055-7bc9d93a0333" containerName="alertmanager"
Mar 13 12:56:26.012706 master-0 kubenswrapper[19715]: I0313 12:56:26.012015 19715 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.029978 master-0 kubenswrapper[19715]: I0313 12:56:26.029912 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79d876f4d6-kqmws"] Mar 13 12:56:26.044245 master-0 kubenswrapper[19715]: I0313 12:56:26.044175 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-trusted-ca-bundle\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.044648 master-0 kubenswrapper[19715]: I0313 12:56:26.044618 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-oauth-serving-cert\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.044893 master-0 kubenswrapper[19715]: I0313 12:56:26.044868 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-serving-cert\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.045047 master-0 kubenswrapper[19715]: I0313 12:56:26.045025 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-oauth-config\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.045202 
master-0 kubenswrapper[19715]: I0313 12:56:26.045178 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-service-ca\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.045362 master-0 kubenswrapper[19715]: I0313 12:56:26.045341 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-console-config\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.045529 master-0 kubenswrapper[19715]: I0313 12:56:26.045503 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pswgb\" (UniqueName: \"kubernetes.io/projected/705af152-5524-4500-b326-80cc4ee76bee-kube-api-access-pswgb\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.147216 master-0 kubenswrapper[19715]: I0313 12:56:26.147145 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-trusted-ca-bundle\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.147216 master-0 kubenswrapper[19715]: I0313 12:56:26.147221 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-oauth-serving-cert\") pod \"console-79d876f4d6-kqmws\" (UID: 
\"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.147838 master-0 kubenswrapper[19715]: I0313 12:56:26.147320 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-serving-cert\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.147838 master-0 kubenswrapper[19715]: I0313 12:56:26.147601 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-oauth-config\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.147838 master-0 kubenswrapper[19715]: I0313 12:56:26.147668 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-service-ca\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.147838 master-0 kubenswrapper[19715]: I0313 12:56:26.147724 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-console-config\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.148549 master-0 kubenswrapper[19715]: I0313 12:56:26.148146 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pswgb\" (UniqueName: \"kubernetes.io/projected/705af152-5524-4500-b326-80cc4ee76bee-kube-api-access-pswgb\") 
pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.148652 master-0 kubenswrapper[19715]: I0313 12:56:26.148598 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-oauth-serving-cert\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.149082 master-0 kubenswrapper[19715]: I0313 12:56:26.149047 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-console-config\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.149319 master-0 kubenswrapper[19715]: I0313 12:56:26.149255 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-service-ca\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.150275 master-0 kubenswrapper[19715]: I0313 12:56:26.150249 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-trusted-ca-bundle\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.153385 master-0 kubenswrapper[19715]: I0313 12:56:26.153339 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-oauth-config\") pod 
\"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.153705 master-0 kubenswrapper[19715]: I0313 12:56:26.153561 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-serving-cert\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.166639 master-0 kubenswrapper[19715]: I0313 12:56:26.166553 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pswgb\" (UniqueName: \"kubernetes.io/projected/705af152-5524-4500-b326-80cc4ee76bee-kube-api-access-pswgb\") pod \"console-79d876f4d6-kqmws\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.242399 master-0 kubenswrapper[19715]: I0313 12:56:26.242323 19715 generic.go:334] "Generic (PLEG): container finished" podID="fb537079-a878-4105-9055-7bc9d93a0333" containerID="de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52" exitCode=0 Mar 13 12:56:26.242399 master-0 kubenswrapper[19715]: I0313 12:56:26.242382 19715 generic.go:334] "Generic (PLEG): container finished" podID="fb537079-a878-4105-9055-7bc9d93a0333" containerID="192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41" exitCode=0 Mar 13 12:56:26.242741 master-0 kubenswrapper[19715]: I0313 12:56:26.242414 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerDied","Data":"de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52"} Mar 13 12:56:26.242741 master-0 kubenswrapper[19715]: I0313 12:56:26.242467 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerDied","Data":"192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41"} Mar 13 12:56:26.242741 master-0 kubenswrapper[19715]: I0313 12:56:26.242485 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"fb537079-a878-4105-9055-7bc9d93a0333","Type":"ContainerDied","Data":"98f06749f6de1d47d550220a1e1d42935e35af40efb8ae8fffcf76492a2cffa2"} Mar 13 12:56:26.242741 master-0 kubenswrapper[19715]: I0313 12:56:26.242535 19715 scope.go:117] "RemoveContainer" containerID="2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad" Mar 13 12:56:26.243110 master-0 kubenswrapper[19715]: I0313 12:56:26.243074 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.266416 master-0 kubenswrapper[19715]: I0313 12:56:26.266318 19715 scope.go:117] "RemoveContainer" containerID="de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52" Mar 13 12:56:26.290083 master-0 kubenswrapper[19715]: I0313 12:56:26.290010 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:56:26.295543 master-0 kubenswrapper[19715]: I0313 12:56:26.295464 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:56:26.300785 master-0 kubenswrapper[19715]: I0313 12:56:26.300730 19715 scope.go:117] "RemoveContainer" containerID="5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5" Mar 13 12:56:26.320551 master-0 kubenswrapper[19715]: I0313 12:56:26.320473 19715 scope.go:117] "RemoveContainer" containerID="192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41" Mar 13 12:56:26.334796 master-0 kubenswrapper[19715]: I0313 12:56:26.334735 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:26.340207 master-0 kubenswrapper[19715]: I0313 12:56:26.339190 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:56:26.343046 master-0 kubenswrapper[19715]: I0313 12:56:26.342990 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.357121 master-0 kubenswrapper[19715]: I0313 12:56:26.356999 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 13 12:56:26.357397 master-0 kubenswrapper[19715]: I0313 12:56:26.357300 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-mjc6s" Mar 13 12:56:26.357559 master-0 kubenswrapper[19715]: I0313 12:56:26.357537 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 13 12:56:26.357716 master-0 kubenswrapper[19715]: I0313 12:56:26.357692 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 13 12:56:26.357836 master-0 kubenswrapper[19715]: I0313 12:56:26.357814 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 13 12:56:26.358006 master-0 kubenswrapper[19715]: I0313 12:56:26.357983 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 13 12:56:26.358146 master-0 kubenswrapper[19715]: I0313 12:56:26.358116 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 13 12:56:26.358267 master-0 kubenswrapper[19715]: I0313 12:56:26.358246 19715 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"alertmanager-main-tls" Mar 13 12:56:26.359954 master-0 kubenswrapper[19715]: I0313 12:56:26.359908 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.361356 master-0 kubenswrapper[19715]: I0313 12:56:26.361327 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.361610 master-0 kubenswrapper[19715]: I0313 12:56:26.361569 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.361767 master-0 kubenswrapper[19715]: I0313 12:56:26.361747 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.361889 master-0 kubenswrapper[19715]: I0313 12:56:26.361872 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"web-config\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-web-config\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.362036 master-0 kubenswrapper[19715]: I0313 12:56:26.362016 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5n8b\" (UniqueName: \"kubernetes.io/projected/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-kube-api-access-l5n8b\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.362241 master-0 kubenswrapper[19715]: I0313 12:56:26.362222 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.362437 master-0 kubenswrapper[19715]: I0313 12:56:26.362416 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-config-volume\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.362601 master-0 kubenswrapper[19715]: I0313 12:56:26.362568 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-config-out\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.362736 master-0 kubenswrapper[19715]: I0313 12:56:26.362718 19715 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-tls-assets\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.362840 master-0 kubenswrapper[19715]: I0313 12:56:26.362823 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.414757 master-0 kubenswrapper[19715]: I0313 12:56:26.403447 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.414757 master-0 kubenswrapper[19715]: I0313 12:56:26.360191 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 13 12:56:26.414757 master-0 kubenswrapper[19715]: I0313 12:56:26.410020 19715 scope.go:117] "RemoveContainer" containerID="b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614" Mar 13 12:56:26.473714 master-0 kubenswrapper[19715]: I0313 12:56:26.473665 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 12:56:26.479106 master-0 kubenswrapper[19715]: I0313 12:56:26.479066 19715 scope.go:117] "RemoveContainer" 
containerID="9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110" Mar 13 12:56:26.499080 master-0 kubenswrapper[19715]: I0313 12:56:26.498950 19715 scope.go:117] "RemoveContainer" containerID="c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8" Mar 13 12:56:26.505037 master-0 kubenswrapper[19715]: I0313 12:56:26.504944 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.505155 master-0 kubenswrapper[19715]: I0313 12:56:26.505032 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.505155 master-0 kubenswrapper[19715]: I0313 12:56:26.505091 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-web-config\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.505155 master-0 kubenswrapper[19715]: I0313 12:56:26.505125 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l5n8b\" (UniqueName: \"kubernetes.io/projected/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-kube-api-access-l5n8b\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.506353 master-0 kubenswrapper[19715]: I0313 12:56:26.505243 19715 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.506353 master-0 kubenswrapper[19715]: I0313 12:56:26.505277 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-config-volume\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.506353 master-0 kubenswrapper[19715]: I0313 12:56:26.505338 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-config-out\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.506353 master-0 kubenswrapper[19715]: I0313 12:56:26.505370 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-tls-assets\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.506353 master-0 kubenswrapper[19715]: I0313 12:56:26.505416 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.506353 master-0 kubenswrapper[19715]: I0313 
12:56:26.505486 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.506353 master-0 kubenswrapper[19715]: I0313 12:56:26.505560 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.506353 master-0 kubenswrapper[19715]: I0313 12:56:26.505611 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.512815 master-0 kubenswrapper[19715]: I0313 12:56:26.508404 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.512815 master-0 kubenswrapper[19715]: I0313 12:56:26.508797 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " 
pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.512815 master-0 kubenswrapper[19715]: I0313 12:56:26.508808 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.524398 master-0 kubenswrapper[19715]: I0313 12:56:26.514996 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.524398 master-0 kubenswrapper[19715]: I0313 12:56:26.516083 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-config-out\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.524398 master-0 kubenswrapper[19715]: I0313 12:56:26.516106 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-config-volume\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.524398 master-0 kubenswrapper[19715]: I0313 12:56:26.519438 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: 
\"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.524398 master-0 kubenswrapper[19715]: I0313 12:56:26.522421 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-tls-assets\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.524398 master-0 kubenswrapper[19715]: I0313 12:56:26.523781 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.527029 master-0 kubenswrapper[19715]: I0313 12:56:26.526976 19715 scope.go:117] "RemoveContainer" containerID="2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad" Mar 13 12:56:26.527677 master-0 kubenswrapper[19715]: E0313 12:56:26.527559 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad\": container with ID starting with 2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad not found: ID does not exist" containerID="2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad" Mar 13 12:56:26.527839 master-0 kubenswrapper[19715]: I0313 12:56:26.527687 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad"} err="failed to get container status \"2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad\": rpc error: code = NotFound desc = could not find container 
\"2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad\": container with ID starting with 2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad not found: ID does not exist" Mar 13 12:56:26.527839 master-0 kubenswrapper[19715]: I0313 12:56:26.527726 19715 scope.go:117] "RemoveContainer" containerID="de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52" Mar 13 12:56:26.528452 master-0 kubenswrapper[19715]: I0313 12:56:26.528398 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.528661 master-0 kubenswrapper[19715]: I0313 12:56:26.528598 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5n8b\" (UniqueName: \"kubernetes.io/projected/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-kube-api-access-l5n8b\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.530147 master-0 kubenswrapper[19715]: E0313 12:56:26.530102 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52\": container with ID starting with de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52 not found: ID does not exist" containerID="de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52" Mar 13 12:56:26.530282 master-0 kubenswrapper[19715]: I0313 12:56:26.530147 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52"} err="failed to get container status 
\"de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52\": rpc error: code = NotFound desc = could not find container \"de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52\": container with ID starting with de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52 not found: ID does not exist" Mar 13 12:56:26.530282 master-0 kubenswrapper[19715]: I0313 12:56:26.530179 19715 scope.go:117] "RemoveContainer" containerID="5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5" Mar 13 12:56:26.531063 master-0 kubenswrapper[19715]: E0313 12:56:26.530936 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5\": container with ID starting with 5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5 not found: ID does not exist" containerID="5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5" Mar 13 12:56:26.531063 master-0 kubenswrapper[19715]: I0313 12:56:26.530970 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5"} err="failed to get container status \"5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5\": rpc error: code = NotFound desc = could not find container \"5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5\": container with ID starting with 5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5 not found: ID does not exist" Mar 13 12:56:26.531063 master-0 kubenswrapper[19715]: I0313 12:56:26.530990 19715 scope.go:117] "RemoveContainer" containerID="192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41" Mar 13 12:56:26.531820 master-0 kubenswrapper[19715]: E0313 12:56:26.531782 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41\": container with ID starting with 192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41 not found: ID does not exist" containerID="192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41" Mar 13 12:56:26.531906 master-0 kubenswrapper[19715]: I0313 12:56:26.531824 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41"} err="failed to get container status \"192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41\": rpc error: code = NotFound desc = could not find container \"192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41\": container with ID starting with 192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41 not found: ID does not exist" Mar 13 12:56:26.531906 master-0 kubenswrapper[19715]: I0313 12:56:26.531848 19715 scope.go:117] "RemoveContainer" containerID="b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614" Mar 13 12:56:26.532430 master-0 kubenswrapper[19715]: E0313 12:56:26.532390 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614\": container with ID starting with b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614 not found: ID does not exist" containerID="b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614" Mar 13 12:56:26.532430 master-0 kubenswrapper[19715]: I0313 12:56:26.532419 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614"} err="failed to get container status \"b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614\": rpc error: code = NotFound desc = could not find container 
\"b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614\": container with ID starting with b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614 not found: ID does not exist" Mar 13 12:56:26.532697 master-0 kubenswrapper[19715]: I0313 12:56:26.532439 19715 scope.go:117] "RemoveContainer" containerID="9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110" Mar 13 12:56:26.532923 master-0 kubenswrapper[19715]: E0313 12:56:26.532830 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110\": container with ID starting with 9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110 not found: ID does not exist" containerID="9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110" Mar 13 12:56:26.533020 master-0 kubenswrapper[19715]: I0313 12:56:26.532917 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110"} err="failed to get container status \"9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110\": rpc error: code = NotFound desc = could not find container \"9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110\": container with ID starting with 9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110 not found: ID does not exist" Mar 13 12:56:26.533020 master-0 kubenswrapper[19715]: I0313 12:56:26.532938 19715 scope.go:117] "RemoveContainer" containerID="c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8" Mar 13 12:56:26.533440 master-0 kubenswrapper[19715]: E0313 12:56:26.533324 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8\": container with ID starting with 
c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8 not found: ID does not exist" containerID="c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8" Mar 13 12:56:26.533440 master-0 kubenswrapper[19715]: I0313 12:56:26.533354 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8"} err="failed to get container status \"c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8\": rpc error: code = NotFound desc = could not find container \"c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8\": container with ID starting with c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8 not found: ID does not exist" Mar 13 12:56:26.533440 master-0 kubenswrapper[19715]: I0313 12:56:26.533373 19715 scope.go:117] "RemoveContainer" containerID="2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad" Mar 13 12:56:26.533912 master-0 kubenswrapper[19715]: I0313 12:56:26.533791 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad"} err="failed to get container status \"2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad\": rpc error: code = NotFound desc = could not find container \"2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad\": container with ID starting with 2fdc33c67656d38917448fd09fcb2fdc85017dd6678cd0370d5c1f15725cc5ad not found: ID does not exist" Mar 13 12:56:26.533912 master-0 kubenswrapper[19715]: I0313 12:56:26.533821 19715 scope.go:117] "RemoveContainer" containerID="de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52" Mar 13 12:56:26.534280 master-0 kubenswrapper[19715]: I0313 12:56:26.534153 19715 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52"} err="failed to get container status \"de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52\": rpc error: code = NotFound desc = could not find container \"de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52\": container with ID starting with de89651391cd69068acb2f13e8272bc93227e2c8bcf9df2803cdbde29bd1bb52 not found: ID does not exist" Mar 13 12:56:26.534280 master-0 kubenswrapper[19715]: I0313 12:56:26.534178 19715 scope.go:117] "RemoveContainer" containerID="5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5" Mar 13 12:56:26.534701 master-0 kubenswrapper[19715]: I0313 12:56:26.534598 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5"} err="failed to get container status \"5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5\": rpc error: code = NotFound desc = could not find container \"5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5\": container with ID starting with 5e82665df41138d7d8f2187cca2613d0bdb4f1024c4bcf343eaf9a0d7740cfd5 not found: ID does not exist" Mar 13 12:56:26.534701 master-0 kubenswrapper[19715]: I0313 12:56:26.534627 19715 scope.go:117] "RemoveContainer" containerID="192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41" Mar 13 12:56:26.535143 master-0 kubenswrapper[19715]: I0313 12:56:26.535038 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41"} err="failed to get container status \"192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41\": rpc error: code = NotFound desc = could not find container \"192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41\": container with ID starting with 
192f645e880bc51eab306c0cdeb9ddd0ad3c7835583602358b4c4e088e1f1e41 not found: ID does not exist" Mar 13 12:56:26.535143 master-0 kubenswrapper[19715]: I0313 12:56:26.535067 19715 scope.go:117] "RemoveContainer" containerID="b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614" Mar 13 12:56:26.535412 master-0 kubenswrapper[19715]: I0313 12:56:26.535317 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614"} err="failed to get container status \"b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614\": rpc error: code = NotFound desc = could not find container \"b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614\": container with ID starting with b6c85c764321c0d1690ea31254cd38edbb6d49eef48ec27ed50b1929632aa614 not found: ID does not exist" Mar 13 12:56:26.535412 master-0 kubenswrapper[19715]: I0313 12:56:26.535344 19715 scope.go:117] "RemoveContainer" containerID="9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110" Mar 13 12:56:26.535846 master-0 kubenswrapper[19715]: I0313 12:56:26.535642 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110"} err="failed to get container status \"9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110\": rpc error: code = NotFound desc = could not find container \"9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110\": container with ID starting with 9ffc74d0dcf04edfc55afe69534c4da18f724ce3c97b5dfb8e458ff1a01e4110 not found: ID does not exist" Mar 13 12:56:26.535846 master-0 kubenswrapper[19715]: I0313 12:56:26.535712 19715 scope.go:117] "RemoveContainer" containerID="c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8" Mar 13 12:56:26.536113 master-0 kubenswrapper[19715]: I0313 12:56:26.536077 19715 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8"} err="failed to get container status \"c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8\": rpc error: code = NotFound desc = could not find container \"c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8\": container with ID starting with c6196510c7939f011ac84b583b1e2797a81db05508e7b180ebdde4bd3f2472a8 not found: ID does not exist" Mar 13 12:56:26.536316 master-0 kubenswrapper[19715]: I0313 12:56:26.536259 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/31f6a3b3-4e57-48bd-b40e-308ba4a2cd90-web-config\") pod \"alertmanager-main-0\" (UID: \"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.767853 master-0 kubenswrapper[19715]: I0313 12:56:26.767454 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 12:56:26.845514 master-0 kubenswrapper[19715]: I0313 12:56:26.845441 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-79d876f4d6-kqmws"] Mar 13 12:56:26.853681 master-0 kubenswrapper[19715]: W0313 12:56:26.853613 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod705af152_5524_4500_b326_80cc4ee76bee.slice/crio-bf938663ffed3689f7098ea0a408a753e54981e2ca85c1c01cd91ec7fb9d341f WatchSource:0}: Error finding container bf938663ffed3689f7098ea0a408a753e54981e2ca85c1c01cd91ec7fb9d341f: Status 404 returned error can't find the container with id bf938663ffed3689f7098ea0a408a753e54981e2ca85c1c01cd91ec7fb9d341f Mar 13 12:56:27.183490 master-0 kubenswrapper[19715]: I0313 12:56:27.183438 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 
12:56:27.189438 master-0 kubenswrapper[19715]: W0313 12:56:27.189362 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod31f6a3b3_4e57_48bd_b40e_308ba4a2cd90.slice/crio-b112fc94046785a4ca0dcca7a3939c65f2ede742aaa69e38d90b9f0dd2b02ce3 WatchSource:0}: Error finding container b112fc94046785a4ca0dcca7a3939c65f2ede742aaa69e38d90b9f0dd2b02ce3: Status 404 returned error can't find the container with id b112fc94046785a4ca0dcca7a3939c65f2ede742aaa69e38d90b9f0dd2b02ce3 Mar 13 12:56:27.250672 master-0 kubenswrapper[19715]: I0313 12:56:27.250615 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90","Type":"ContainerStarted","Data":"b112fc94046785a4ca0dcca7a3939c65f2ede742aaa69e38d90b9f0dd2b02ce3"} Mar 13 12:56:27.252294 master-0 kubenswrapper[19715]: I0313 12:56:27.252251 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79d876f4d6-kqmws" event={"ID":"705af152-5524-4500-b326-80cc4ee76bee","Type":"ContainerStarted","Data":"f68e53724e2966b52f406822088d5de5ec83b1cc4ea10d74c2419f8367d009e2"} Mar 13 12:56:27.252378 master-0 kubenswrapper[19715]: I0313 12:56:27.252293 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79d876f4d6-kqmws" event={"ID":"705af152-5524-4500-b326-80cc4ee76bee","Type":"ContainerStarted","Data":"bf938663ffed3689f7098ea0a408a753e54981e2ca85c1c01cd91ec7fb9d341f"} Mar 13 12:56:27.278459 master-0 kubenswrapper[19715]: I0313 12:56:27.278371 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-79d876f4d6-kqmws" podStartSLOduration=2.278349528 podStartE2EDuration="2.278349528s" podCreationTimestamp="2026-03-13 12:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 
12:56:27.27720133 +0000 UTC m=+413.843874107" watchObservedRunningTime="2026-03-13 12:56:27.278349528 +0000 UTC m=+413.845022285" Mar 13 12:56:27.707290 master-0 kubenswrapper[19715]: I0313 12:56:27.707207 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb537079-a878-4105-9055-7bc9d93a0333" path="/var/lib/kubelet/pods/fb537079-a878-4105-9055-7bc9d93a0333/volumes" Mar 13 12:56:28.262539 master-0 kubenswrapper[19715]: I0313 12:56:28.262476 19715 generic.go:334] "Generic (PLEG): container finished" podID="31f6a3b3-4e57-48bd-b40e-308ba4a2cd90" containerID="f4cb30cbcb404d219117fea7db5452e8ff58cf074db4f542b4c22b8b60dac5ef" exitCode=0 Mar 13 12:56:28.263275 master-0 kubenswrapper[19715]: I0313 12:56:28.262545 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90","Type":"ContainerDied","Data":"f4cb30cbcb404d219117fea7db5452e8ff58cf074db4f542b4c22b8b60dac5ef"} Mar 13 12:56:29.283284 master-0 kubenswrapper[19715]: I0313 12:56:29.282913 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90","Type":"ContainerStarted","Data":"7406547996e99614570b400f2761446ca96fd074ef2682b16bccaeb0f2e61049"} Mar 13 12:56:29.283284 master-0 kubenswrapper[19715]: I0313 12:56:29.282995 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90","Type":"ContainerStarted","Data":"d39d1a4092fddef915bc978b34d3e113c223ff23f74078059237562de29bb5f3"} Mar 13 12:56:29.283284 master-0 kubenswrapper[19715]: I0313 12:56:29.283017 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90","Type":"ContainerStarted","Data":"3f1924a1eb32dec72c519ecf44eb713feab046ddacc4e48c989d9c078dafec99"} Mar 13 
12:56:29.283284 master-0 kubenswrapper[19715]: I0313 12:56:29.283033 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90","Type":"ContainerStarted","Data":"ac30ce6b7b3b44e0cdb560f6acafc4194500db88e6fdeb922588f3c7ba3f2f9f"} Mar 13 12:56:29.283284 master-0 kubenswrapper[19715]: I0313 12:56:29.283053 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90","Type":"ContainerStarted","Data":"6d33940cc04eb6f5c66537b637f72c706f72969773063c7b650464228e4e37d3"} Mar 13 12:56:30.296092 master-0 kubenswrapper[19715]: I0313 12:56:30.295998 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"31f6a3b3-4e57-48bd-b40e-308ba4a2cd90","Type":"ContainerStarted","Data":"9cd0313c65fe929055b5487208e91f8f3a48867d8adf662050310b0a17dc44f6"} Mar 13 12:56:30.339334 master-0 kubenswrapper[19715]: I0313 12:56:30.339186 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=4.339143489 podStartE2EDuration="4.339143489s" podCreationTimestamp="2026-03-13 12:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:56:30.325071541 +0000 UTC m=+416.891744318" watchObservedRunningTime="2026-03-13 12:56:30.339143489 +0000 UTC m=+416.905816246" Mar 13 12:56:31.775101 master-0 kubenswrapper[19715]: I0313 12:56:31.774990 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:56:31.775101 master-0 kubenswrapper[19715]: I0313 12:56:31.775085 19715 
prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:56:34.161429 master-0 kubenswrapper[19715]: I0313 12:56:34.161339 19715 scope.go:117] "RemoveContainer" containerID="9eb4b2e62b81effa2b30fc9741ea362aa4ef66b19b64c96e124eb88cbf1ef364" Mar 13 12:56:34.187113 master-0 kubenswrapper[19715]: I0313 12:56:34.187060 19715 scope.go:117] "RemoveContainer" containerID="4f7ff4562a79b8bd2c0cbb72f384270ed3c70b557b5276791fba9d8debdb7623" Mar 13 12:56:36.335525 master-0 kubenswrapper[19715]: I0313 12:56:36.335438 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:36.335525 master-0 kubenswrapper[19715]: I0313 12:56:36.335510 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 12:56:36.340357 master-0 kubenswrapper[19715]: I0313 12:56:36.340281 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:56:36.340654 master-0 kubenswrapper[19715]: I0313 12:56:36.340363 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:56:36.616431 master-0 kubenswrapper[19715]: I0313 12:56:36.616189 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:36.617150 master-0 kubenswrapper[19715]: I0313 12:56:36.617103 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:36.618308 master-0 kubenswrapper[19715]: I0313 12:56:36.618237 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:36.620683 master-0 kubenswrapper[19715]: I0313 12:56:36.620567 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") pod \"oauth-openshift-5fc8d565d6-jmhct\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:36.812175 master-0 kubenswrapper[19715]: I0313 12:56:36.812025 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:37.244205 master-0 kubenswrapper[19715]: I0313 12:56:37.244112 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"] Mar 13 12:56:37.251893 master-0 kubenswrapper[19715]: I0313 12:56:37.251830 19715 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 12:56:37.368983 master-0 kubenswrapper[19715]: I0313 12:56:37.367722 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" event={"ID":"16eeeff8-7c53-4c3e-876a-cff0902955fd","Type":"ContainerStarted","Data":"34c459eedb3b5cedb03c32d18256fa4597eaeb1779e6e28e4cf9e124887cd33c"} Mar 13 12:56:40.411139 master-0 kubenswrapper[19715]: I0313 12:56:40.411026 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" event={"ID":"16eeeff8-7c53-4c3e-876a-cff0902955fd","Type":"ContainerStarted","Data":"a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672"} Mar 13 12:56:40.411824 master-0 kubenswrapper[19715]: I0313 12:56:40.411361 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:40.434414 master-0 kubenswrapper[19715]: I0313 12:56:40.434288 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" podStartSLOduration=33.851707817 podStartE2EDuration="36.434250292s" podCreationTimestamp="2026-03-13 12:56:04 +0000 UTC" firstStartedPulling="2026-03-13 12:56:37.251652711 +0000 UTC m=+423.818325488" lastFinishedPulling="2026-03-13 12:56:39.834195216 +0000 UTC m=+426.400867963" observedRunningTime="2026-03-13 12:56:40.431786002 +0000 UTC m=+426.998458759" watchObservedRunningTime="2026-03-13 12:56:40.434250292 
+0000 UTC m=+427.000923069" Mar 13 12:56:40.507085 master-0 kubenswrapper[19715]: I0313 12:56:40.506989 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:56:41.716986 master-0 kubenswrapper[19715]: I0313 12:56:41.716920 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Mar 13 12:56:41.718187 master-0 kubenswrapper[19715]: I0313 12:56:41.718156 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 13 12:56:41.724774 master-0 kubenswrapper[19715]: I0313 12:56:41.724707 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-7gz29" Mar 13 12:56:41.725085 master-0 kubenswrapper[19715]: I0313 12:56:41.725044 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 12:56:41.726335 master-0 kubenswrapper[19715]: I0313 12:56:41.726286 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Mar 13 12:56:41.750409 master-0 kubenswrapper[19715]: I0313 12:56:41.750349 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 13 12:56:41.750732 master-0 kubenswrapper[19715]: I0313 12:56:41.750468 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: 
\"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 13 12:56:41.775214 master-0 kubenswrapper[19715]: I0313 12:56:41.775157 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:56:41.775508 master-0 kubenswrapper[19715]: I0313 12:56:41.775248 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:56:41.852170 master-0 kubenswrapper[19715]: I0313 12:56:41.852101 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 13 12:56:41.852972 master-0 kubenswrapper[19715]: I0313 12:56:41.852189 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 13 12:56:41.852972 master-0 kubenswrapper[19715]: I0313 12:56:41.852550 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kubelet-dir\") pod \"revision-pruner-6-master-0\" (UID: \"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76\") " 
pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 13 12:56:41.870502 master-0 kubenswrapper[19715]: I0313 12:56:41.870427 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kube-api-access\") pod \"revision-pruner-6-master-0\" (UID: \"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76\") " pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 13 12:56:42.059138 master-0 kubenswrapper[19715]: I0313 12:56:42.058981 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 13 12:56:42.513823 master-0 kubenswrapper[19715]: I0313 12:56:42.513736 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-master-0"] Mar 13 12:56:42.518837 master-0 kubenswrapper[19715]: W0313 12:56:42.518784 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod14e1e65d_a285_4c57_bcdd_b0dc6ebe8f76.slice/crio-e37e10d2cd1de99ac867a1def1c24b181fffe4a7eff6ae144843a43af9155043 WatchSource:0}: Error finding container e37e10d2cd1de99ac867a1def1c24b181fffe4a7eff6ae144843a43af9155043: Status 404 returned error can't find the container with id e37e10d2cd1de99ac867a1def1c24b181fffe4a7eff6ae144843a43af9155043 Mar 13 12:56:43.444645 master-0 kubenswrapper[19715]: I0313 12:56:43.444568 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76","Type":"ContainerStarted","Data":"1d499aec3eb408d2fc3db958e07355aae0f6bf17dfda93b4a2a1634810201c06"} Mar 13 12:56:43.444645 master-0 kubenswrapper[19715]: I0313 12:56:43.444641 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" 
event={"ID":"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76","Type":"ContainerStarted","Data":"e37e10d2cd1de99ac867a1def1c24b181fffe4a7eff6ae144843a43af9155043"} Mar 13 12:56:43.479970 master-0 kubenswrapper[19715]: I0313 12:56:43.479636 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-master-0" podStartSLOduration=2.479597082 podStartE2EDuration="2.479597082s" podCreationTimestamp="2026-03-13 12:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:56:43.464628415 +0000 UTC m=+430.031301262" watchObservedRunningTime="2026-03-13 12:56:43.479597082 +0000 UTC m=+430.046269849" Mar 13 12:56:44.457257 master-0 kubenswrapper[19715]: I0313 12:56:44.457165 19715 generic.go:334] "Generic (PLEG): container finished" podID="14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76" containerID="1d499aec3eb408d2fc3db958e07355aae0f6bf17dfda93b4a2a1634810201c06" exitCode=0 Mar 13 12:56:44.458311 master-0 kubenswrapper[19715]: I0313 12:56:44.457297 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76","Type":"ContainerDied","Data":"1d499aec3eb408d2fc3db958e07355aae0f6bf17dfda93b4a2a1634810201c06"} Mar 13 12:56:45.769483 master-0 kubenswrapper[19715]: I0313 12:56:45.769404 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 13 12:56:45.859646 master-0 kubenswrapper[19715]: I0313 12:56:45.859544 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kube-api-access\") pod \"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76\" (UID: \"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76\") " Mar 13 12:56:45.859902 master-0 kubenswrapper[19715]: I0313 12:56:45.859752 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kubelet-dir\") pod \"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76\" (UID: \"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76\") " Mar 13 12:56:45.860297 master-0 kubenswrapper[19715]: I0313 12:56:45.860243 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76" (UID: "14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:56:45.861099 master-0 kubenswrapper[19715]: I0313 12:56:45.861059 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:45.862886 master-0 kubenswrapper[19715]: I0313 12:56:45.862852 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76" (UID: "14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:56:45.963414 master-0 kubenswrapper[19715]: I0313 12:56:45.963293 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:46.336258 master-0 kubenswrapper[19715]: I0313 12:56:46.336156 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:56:46.336258 master-0 kubenswrapper[19715]: I0313 12:56:46.336252 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:56:46.479884 master-0 kubenswrapper[19715]: I0313 12:56:46.479737 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-master-0" event={"ID":"14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76","Type":"ContainerDied","Data":"e37e10d2cd1de99ac867a1def1c24b181fffe4a7eff6ae144843a43af9155043"} Mar 13 12:56:46.479884 master-0 kubenswrapper[19715]: I0313 12:56:46.479850 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-master-0" Mar 13 12:56:46.480879 master-0 kubenswrapper[19715]: I0313 12:56:46.479855 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e37e10d2cd1de99ac867a1def1c24b181fffe4a7eff6ae144843a43af9155043" Mar 13 12:56:51.012806 master-0 kubenswrapper[19715]: I0313 12:56:51.012524 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-b649d7df7-lm9xz" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" containerID="cri-o://ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e" gracePeriod=15 Mar 13 12:56:51.466667 master-0 kubenswrapper[19715]: I0313 12:56:51.463664 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-b649d7df7-lm9xz_3d6f2f8a-af35-43a1-8baf-fe3e731acba1/console/0.log" Mar 13 12:56:51.466667 master-0 kubenswrapper[19715]: I0313 12:56:51.463809 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:56:51.531815 master-0 kubenswrapper[19715]: I0313 12:56:51.531753 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-b649d7df7-lm9xz_3d6f2f8a-af35-43a1-8baf-fe3e731acba1/console/0.log" Mar 13 12:56:51.531815 master-0 kubenswrapper[19715]: I0313 12:56:51.531819 19715 generic.go:334] "Generic (PLEG): container finished" podID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerID="ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e" exitCode=2 Mar 13 12:56:51.532253 master-0 kubenswrapper[19715]: I0313 12:56:51.531863 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b649d7df7-lm9xz" event={"ID":"3d6f2f8a-af35-43a1-8baf-fe3e731acba1","Type":"ContainerDied","Data":"ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e"} Mar 13 12:56:51.532253 master-0 kubenswrapper[19715]: I0313 12:56:51.531898 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-b649d7df7-lm9xz" event={"ID":"3d6f2f8a-af35-43a1-8baf-fe3e731acba1","Type":"ContainerDied","Data":"268fa8b24649b2535d636e5afbb81d5567d65d515fff43c2b7874859144ab4a1"} Mar 13 12:56:51.532253 master-0 kubenswrapper[19715]: I0313 12:56:51.531919 19715 scope.go:117] "RemoveContainer" containerID="ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e" Mar 13 12:56:51.532253 master-0 kubenswrapper[19715]: I0313 12:56:51.531932 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-b649d7df7-lm9xz" Mar 13 12:56:51.554913 master-0 kubenswrapper[19715]: I0313 12:56:51.554853 19715 scope.go:117] "RemoveContainer" containerID="ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e" Mar 13 12:56:51.555533 master-0 kubenswrapper[19715]: E0313 12:56:51.555478 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e\": container with ID starting with ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e not found: ID does not exist" containerID="ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e" Mar 13 12:56:51.555626 master-0 kubenswrapper[19715]: I0313 12:56:51.555547 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e"} err="failed to get container status \"ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e\": rpc error: code = NotFound desc = could not find container \"ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e\": container with ID starting with ae0eef76aa3a75df935d1275cd58b46f851f31bbad39bb86bd9b578b5c50291e not found: ID does not exist" Mar 13 12:56:51.607497 master-0 kubenswrapper[19715]: I0313 12:56:51.607402 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-config\") pod \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " Mar 13 12:56:51.607926 master-0 kubenswrapper[19715]: I0313 12:56:51.607534 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-trusted-ca-bundle\") 
pod \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " Mar 13 12:56:51.607926 master-0 kubenswrapper[19715]: I0313 12:56:51.607684 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-oauth-config\") pod \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " Mar 13 12:56:51.607926 master-0 kubenswrapper[19715]: I0313 12:56:51.607727 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-service-ca\") pod \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " Mar 13 12:56:51.607926 master-0 kubenswrapper[19715]: I0313 12:56:51.607832 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-oauth-serving-cert\") pod \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " Mar 13 12:56:51.607926 master-0 kubenswrapper[19715]: I0313 12:56:51.607893 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct5vs\" (UniqueName: \"kubernetes.io/projected/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-kube-api-access-ct5vs\") pod \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " Mar 13 12:56:51.607926 master-0 kubenswrapper[19715]: I0313 12:56:51.607920 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-serving-cert\") pod \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\" (UID: \"3d6f2f8a-af35-43a1-8baf-fe3e731acba1\") " Mar 13 12:56:51.610044 master-0 
kubenswrapper[19715]: I0313 12:56:51.609975 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "3d6f2f8a-af35-43a1-8baf-fe3e731acba1" (UID: "3d6f2f8a-af35-43a1-8baf-fe3e731acba1"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:56:51.610288 master-0 kubenswrapper[19715]: I0313 12:56:51.610186 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "3d6f2f8a-af35-43a1-8baf-fe3e731acba1" (UID: "3d6f2f8a-af35-43a1-8baf-fe3e731acba1"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:56:51.610451 master-0 kubenswrapper[19715]: I0313 12:56:51.610365 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-service-ca" (OuterVolumeSpecName: "service-ca") pod "3d6f2f8a-af35-43a1-8baf-fe3e731acba1" (UID: "3d6f2f8a-af35-43a1-8baf-fe3e731acba1"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:56:51.610803 master-0 kubenswrapper[19715]: I0313 12:56:51.610670 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-config" (OuterVolumeSpecName: "console-config") pod "3d6f2f8a-af35-43a1-8baf-fe3e731acba1" (UID: "3d6f2f8a-af35-43a1-8baf-fe3e731acba1"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:56:51.613159 master-0 kubenswrapper[19715]: I0313 12:56:51.613094 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "3d6f2f8a-af35-43a1-8baf-fe3e731acba1" (UID: "3d6f2f8a-af35-43a1-8baf-fe3e731acba1"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:56:51.613788 master-0 kubenswrapper[19715]: I0313 12:56:51.613682 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "3d6f2f8a-af35-43a1-8baf-fe3e731acba1" (UID: "3d6f2f8a-af35-43a1-8baf-fe3e731acba1"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:56:51.613947 master-0 kubenswrapper[19715]: I0313 12:56:51.613789 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-kube-api-access-ct5vs" (OuterVolumeSpecName: "kube-api-access-ct5vs") pod "3d6f2f8a-af35-43a1-8baf-fe3e731acba1" (UID: "3d6f2f8a-af35-43a1-8baf-fe3e731acba1"). InnerVolumeSpecName "kube-api-access-ct5vs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:56:51.710185 master-0 kubenswrapper[19715]: I0313 12:56:51.709495 19715 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:51.710185 master-0 kubenswrapper[19715]: I0313 12:56:51.709545 19715 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:51.710185 master-0 kubenswrapper[19715]: I0313 12:56:51.709557 19715 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:51.710185 master-0 kubenswrapper[19715]: I0313 12:56:51.709568 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct5vs\" (UniqueName: \"kubernetes.io/projected/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-kube-api-access-ct5vs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:51.710185 master-0 kubenswrapper[19715]: I0313 12:56:51.709591 19715 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:51.710185 master-0 kubenswrapper[19715]: I0313 12:56:51.709600 19715 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:51.710185 master-0 kubenswrapper[19715]: I0313 12:56:51.709608 19715 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/3d6f2f8a-af35-43a1-8baf-fe3e731acba1-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:56:51.775547 master-0 kubenswrapper[19715]: I0313 12:56:51.775421 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:56:51.775547 master-0 kubenswrapper[19715]: I0313 12:56:51.775539 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:56:51.863868 master-0 kubenswrapper[19715]: I0313 12:56:51.863772 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-b649d7df7-lm9xz"] Mar 13 12:56:51.871385 master-0 kubenswrapper[19715]: I0313 12:56:51.871308 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-b649d7df7-lm9xz"] Mar 13 12:56:53.249086 master-0 kubenswrapper[19715]: I0313 12:56:53.248987 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"] Mar 13 12:56:53.705029 master-0 kubenswrapper[19715]: I0313 12:56:53.704920 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" path="/var/lib/kubelet/pods/3d6f2f8a-af35-43a1-8baf-fe3e731acba1/volumes" Mar 13 12:56:55.717911 master-0 kubenswrapper[19715]: I0313 12:56:55.717783 19715 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Mar 13 12:56:55.718963 master-0 kubenswrapper[19715]: I0313 12:56:55.717923 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:56:56.336241 master-0 kubenswrapper[19715]: I0313 12:56:56.336131 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:56:56.336802 master-0 kubenswrapper[19715]: I0313 12:56:56.336268 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:57:01.775590 master-0 kubenswrapper[19715]: I0313 12:57:01.775461 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:57:01.776561 master-0 kubenswrapper[19715]: I0313 12:57:01.775651 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:57:06.336622 master-0 kubenswrapper[19715]: I0313 12:57:06.336481 19715 patch_prober.go:28] interesting 
pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:57:06.336622 master-0 kubenswrapper[19715]: I0313 12:57:06.336604 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:57:11.775499 master-0 kubenswrapper[19715]: I0313 12:57:11.775187 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:57:11.775499 master-0 kubenswrapper[19715]: I0313 12:57:11.775349 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:57:16.337805 master-0 kubenswrapper[19715]: I0313 12:57:16.337611 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:57:16.337805 master-0 kubenswrapper[19715]: I0313 12:57:16.337721 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 
10.128.0.99:8443: connect: connection refused" Mar 13 12:57:17.164488 master-0 kubenswrapper[19715]: I0313 12:57:17.162675 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"] Mar 13 12:57:17.164488 master-0 kubenswrapper[19715]: E0313 12:57:17.163077 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76" containerName="pruner" Mar 13 12:57:17.164488 master-0 kubenswrapper[19715]: I0313 12:57:17.163096 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76" containerName="pruner" Mar 13 12:57:17.164488 master-0 kubenswrapper[19715]: E0313 12:57:17.163130 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" Mar 13 12:57:17.164488 master-0 kubenswrapper[19715]: I0313 12:57:17.163136 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" Mar 13 12:57:17.164488 master-0 kubenswrapper[19715]: I0313 12:57:17.163294 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="14e1e65d-a285-4c57-bcdd-b0dc6ebe8f76" containerName="pruner" Mar 13 12:57:17.164488 master-0 kubenswrapper[19715]: I0313 12:57:17.163317 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="3d6f2f8a-af35-43a1-8baf-fe3e731acba1" containerName="console" Mar 13 12:57:17.171600 master-0 kubenswrapper[19715]: I0313 12:57:17.167716 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:17.174495 master-0 kubenswrapper[19715]: I0313 12:57:17.174472 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 12:57:17.175502 master-0 kubenswrapper[19715]: I0313 12:57:17.174913 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-jdg75" Mar 13 12:57:17.191530 master-0 kubenswrapper[19715]: I0313 12:57:17.191426 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"] Mar 13 12:57:17.256906 master-0 kubenswrapper[19715]: I0313 12:57:17.256796 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-var-lock\") pod \"installer-6-master-0\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:17.257249 master-0 kubenswrapper[19715]: I0313 12:57:17.256926 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kube-api-access\") pod \"installer-6-master-0\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:17.257249 master-0 kubenswrapper[19715]: I0313 12:57:17.257015 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:17.358326 master-0 
kubenswrapper[19715]: I0313 12:57:17.358262 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-var-lock\") pod \"installer-6-master-0\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:17.359002 master-0 kubenswrapper[19715]: I0313 12:57:17.358368 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kube-api-access\") pod \"installer-6-master-0\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:17.359002 master-0 kubenswrapper[19715]: I0313 12:57:17.358438 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:17.359002 master-0 kubenswrapper[19715]: I0313 12:57:17.358369 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-var-lock\") pod \"installer-6-master-0\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:17.359002 master-0 kubenswrapper[19715]: I0313 12:57:17.358532 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kubelet-dir\") pod \"installer-6-master-0\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:17.388926 
master-0 kubenswrapper[19715]: I0313 12:57:17.388819 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kube-api-access\") pod \"installer-6-master-0\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:17.522759 master-0 kubenswrapper[19715]: I0313 12:57:17.522595 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:18.001471 master-0 kubenswrapper[19715]: I0313 12:57:18.001414 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-6-master-0"] Mar 13 12:57:18.277887 master-0 kubenswrapper[19715]: I0313 12:57:18.277781 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" podUID="16eeeff8-7c53-4c3e-876a-cff0902955fd" containerName="oauth-openshift" containerID="cri-o://a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672" gracePeriod=15 Mar 13 12:57:18.754314 master-0 kubenswrapper[19715]: I0313 12:57:18.754235 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:57:18.791486 master-0 kubenswrapper[19715]: I0313 12:57:18.788822 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-62f56\" (UniqueName: \"kubernetes.io/projected/16eeeff8-7c53-4c3e-876a-cff0902955fd-kube-api-access-62f56\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.791486 master-0 kubenswrapper[19715]: I0313 12:57:18.788908 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-login\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.791486 master-0 kubenswrapper[19715]: I0313 12:57:18.788944 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-serving-cert\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.791486 master-0 kubenswrapper[19715]: I0313 12:57:18.788980 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-ocp-branding-template\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.791486 master-0 kubenswrapper[19715]: I0313 12:57:18.789020 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-trusted-ca-bundle\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.791486 master-0 kubenswrapper[19715]: I0313 12:57:18.790326 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:57:18.793625 master-0 kubenswrapper[19715]: I0313 12:57:18.793590 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:18.794125 master-0 kubenswrapper[19715]: I0313 12:57:18.794060 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:57:18.794867 master-0 kubenswrapper[19715]: I0313 12:57:18.794818 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:57:18.796196 master-0 kubenswrapper[19715]: I0313 12:57:18.796142 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:57:18.799633 master-0 kubenswrapper[19715]: I0313 12:57:18.799591 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16eeeff8-7c53-4c3e-876a-cff0902955fd-kube-api-access-62f56" (OuterVolumeSpecName: "kube-api-access-62f56") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "kube-api-access-62f56". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:57:18.852221 master-0 kubenswrapper[19715]: I0313 12:57:18.852150 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-7684d47ff-6fs5b"] Mar 13 12:57:18.852756 master-0 kubenswrapper[19715]: E0313 12:57:18.852722 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="16eeeff8-7c53-4c3e-876a-cff0902955fd" containerName="oauth-openshift" Mar 13 12:57:18.852756 master-0 kubenswrapper[19715]: I0313 12:57:18.852748 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="16eeeff8-7c53-4c3e-876a-cff0902955fd" containerName="oauth-openshift" Mar 13 12:57:18.852912 master-0 kubenswrapper[19715]: I0313 12:57:18.852892 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="16eeeff8-7c53-4c3e-876a-cff0902955fd" containerName="oauth-openshift" Mar 13 12:57:18.853622 master-0 kubenswrapper[19715]: I0313 12:57:18.853569 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:18.896651 master-0 kubenswrapper[19715]: I0313 12:57:18.896301 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-policies\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.896651 master-0 kubenswrapper[19715]: I0313 12:57:18.896360 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-error\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.896651 master-0 kubenswrapper[19715]: I0313 12:57:18.896389 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-provider-selection\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.896651 master-0 kubenswrapper[19715]: I0313 12:57:18.896413 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-dir\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.896651 master-0 kubenswrapper[19715]: I0313 12:57:18.896444 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-service-ca\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: 
\"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.896651 master-0 kubenswrapper[19715]: I0313 12:57:18.896499 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-router-certs\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.896651 master-0 kubenswrapper[19715]: I0313 12:57:18.896524 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.896651 master-0 kubenswrapper[19715]: I0313 12:57:18.896601 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") pod \"16eeeff8-7c53-4c3e-876a-cff0902955fd\" (UID: \"16eeeff8-7c53-4c3e-876a-cff0902955fd\") " Mar 13 12:57:18.898237 master-0 kubenswrapper[19715]: I0313 12:57:18.896778 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:57:18.898237 master-0 kubenswrapper[19715]: I0313 12:57:18.896970 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:57:18.898237 master-0 kubenswrapper[19715]: I0313 12:57:18.897047 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-62f56\" (UniqueName: \"kubernetes.io/projected/16eeeff8-7c53-4c3e-876a-cff0902955fd-kube-api-access-62f56\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:18.898237 master-0 kubenswrapper[19715]: I0313 12:57:18.897080 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:18.898237 master-0 kubenswrapper[19715]: I0313 12:57:18.897100 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:18.898237 master-0 kubenswrapper[19715]: I0313 12:57:18.897308 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:18.898237 master-0 kubenswrapper[19715]: I0313 12:57:18.897327 19715 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:18.898237 master-0 kubenswrapper[19715]: I0313 12:57:18.897856 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:57:18.898730 master-0 kubenswrapper[19715]: I0313 12:57:18.898663 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 12:57:18.908456 master-0 kubenswrapper[19715]: I0313 12:57:18.908368 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:57:18.909062 master-0 kubenswrapper[19715]: I0313 12:57:18.908952 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:57:18.909339 master-0 kubenswrapper[19715]: I0313 12:57:18.909055 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:57:18.915913 master-0 kubenswrapper[19715]: I0313 12:57:18.910403 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "16eeeff8-7c53-4c3e-876a-cff0902955fd" (UID: "16eeeff8-7c53-4c3e-876a-cff0902955fd"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 12:57:18.928729 master-0 kubenswrapper[19715]: I0313 12:57:18.928659 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7684d47ff-6fs5b"] Mar 13 12:57:18.982911 master-0 kubenswrapper[19715]: I0313 12:57:18.982793 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"b1e97610-42e2-4c62-82c8-787d8a4c8a05","Type":"ContainerStarted","Data":"d06ff9d9ed53d9358cd6299f91967ef9d40878585f064748b01a8d7986aee1f5"} Mar 13 12:57:18.982911 master-0 kubenswrapper[19715]: I0313 12:57:18.982883 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"b1e97610-42e2-4c62-82c8-787d8a4c8a05","Type":"ContainerStarted","Data":"c04372d475da0a9efff3c4bf26bf2e2ed6d8dca0c554d9c6efe958bbfaddc62d"} Mar 13 12:57:18.988602 master-0 kubenswrapper[19715]: I0313 12:57:18.988518 19715 generic.go:334] "Generic (PLEG): container finished" podID="16eeeff8-7c53-4c3e-876a-cff0902955fd" containerID="a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672" exitCode=0 Mar 13 12:57:18.988602 master-0 kubenswrapper[19715]: I0313 12:57:18.988595 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" event={"ID":"16eeeff8-7c53-4c3e-876a-cff0902955fd","Type":"ContainerDied","Data":"a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672"} Mar 13 12:57:18.988756 master-0 kubenswrapper[19715]: I0313 12:57:18.988630 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" event={"ID":"16eeeff8-7c53-4c3e-876a-cff0902955fd","Type":"ContainerDied","Data":"34c459eedb3b5cedb03c32d18256fa4597eaeb1779e6e28e4cf9e124887cd33c"} Mar 13 12:57:18.988756 master-0 kubenswrapper[19715]: I0313 12:57:18.988650 19715 scope.go:117] 
"RemoveContainer" containerID="a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672" Mar 13 12:57:18.988886 master-0 kubenswrapper[19715]: I0313 12:57:18.988600 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5fc8d565d6-jmhct" Mar 13 12:57:18.999603 master-0 kubenswrapper[19715]: I0313 12:57:18.999515 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56q46\" (UniqueName: \"kubernetes.io/projected/195e6d6e-01a1-44aa-8ebe-b1379da7808f-kube-api-access-56q46\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:18.999724 master-0 kubenswrapper[19715]: I0313 12:57:18.999614 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-error\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:18.999724 master-0 kubenswrapper[19715]: I0313 12:57:18.999680 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-session\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:18.999867 master-0 kubenswrapper[19715]: I0313 12:57:18.999725 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-dir\") pod 
\"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:18.999867 master-0 kubenswrapper[19715]: I0313 12:57:18.999767 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:18.999867 master-0 kubenswrapper[19715]: I0313 12:57:18.999805 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:18.999867 master-0 kubenswrapper[19715]: I0313 12:57:18.999830 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:18.999867 master-0 kubenswrapper[19715]: I0313 12:57:18.999856 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: 
\"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.000170 master-0 kubenswrapper[19715]: I0313 12:57:18.999895 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.000170 master-0 kubenswrapper[19715]: I0313 12:57:18.999933 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-login\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.000170 master-0 kubenswrapper[19715]: I0313 12:57:18.999968 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.000170 master-0 kubenswrapper[19715]: I0313 12:57:19.000013 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-policies\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.000170 master-0 
kubenswrapper[19715]: I0313 12:57:19.000041 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.000170 master-0 kubenswrapper[19715]: I0313 12:57:19.000133 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:19.000170 master-0 kubenswrapper[19715]: I0313 12:57:19.000150 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:19.000170 master-0 kubenswrapper[19715]: I0313 12:57:19.000165 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:19.000170 master-0 kubenswrapper[19715]: I0313 12:57:19.000180 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:19.000932 master-0 kubenswrapper[19715]: I0313 12:57:19.000197 19715 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/16eeeff8-7c53-4c3e-876a-cff0902955fd-audit-policies\") on node \"master-0\" 
DevicePath \"\"" Mar 13 12:57:19.000932 master-0 kubenswrapper[19715]: I0313 12:57:19.000213 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:19.000932 master-0 kubenswrapper[19715]: I0313 12:57:19.000230 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/16eeeff8-7c53-4c3e-876a-cff0902955fd-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:19.015070 master-0 kubenswrapper[19715]: I0313 12:57:19.015014 19715 scope.go:117] "RemoveContainer" containerID="a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672" Mar 13 12:57:19.016335 master-0 kubenswrapper[19715]: E0313 12:57:19.016282 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672\": container with ID starting with a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672 not found: ID does not exist" containerID="a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672" Mar 13 12:57:19.016335 master-0 kubenswrapper[19715]: I0313 12:57:19.016321 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672"} err="failed to get container status \"a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672\": rpc error: code = NotFound desc = could not find container \"a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672\": container with ID starting with a13d8435888ce199e6ae81974dd9eff3b1dd8d942b25c992ebe9e7b4c53b9672 not found: ID does not exist" Mar 13 12:57:19.091762 master-0 
kubenswrapper[19715]: I0313 12:57:19.091382 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-6-master-0" podStartSLOduration=2.091350211 podStartE2EDuration="2.091350211s" podCreationTimestamp="2026-03-13 12:57:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:57:19.056917271 +0000 UTC m=+465.623590038" watchObservedRunningTime="2026-03-13 12:57:19.091350211 +0000 UTC m=+465.658022988" Mar 13 12:57:19.094643 master-0 kubenswrapper[19715]: I0313 12:57:19.094568 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"] Mar 13 12:57:19.101850 master-0 kubenswrapper[19715]: I0313 12:57:19.101744 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.101850 master-0 kubenswrapper[19715]: I0313 12:57:19.101865 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-login\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.102382 master-0 kubenswrapper[19715]: I0313 12:57:19.101915 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.102382 master-0 kubenswrapper[19715]: I0313 12:57:19.101963 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-policies\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.102382 master-0 kubenswrapper[19715]: I0313 12:57:19.102238 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.102559 master-0 kubenswrapper[19715]: I0313 12:57:19.102394 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56q46\" (UniqueName: \"kubernetes.io/projected/195e6d6e-01a1-44aa-8ebe-b1379da7808f-kube-api-access-56q46\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.102559 master-0 kubenswrapper[19715]: I0313 12:57:19.102471 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-error\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.102714 master-0 
kubenswrapper[19715]: I0313 12:57:19.102568 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-session\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.102985 master-0 kubenswrapper[19715]: I0313 12:57:19.102949 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-dir\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.103087 master-0 kubenswrapper[19715]: I0313 12:57:19.103015 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-dir\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.103087 master-0 kubenswrapper[19715]: I0313 12:57:19.103034 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-service-ca\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.103655 master-0 kubenswrapper[19715]: I0313 12:57:19.103607 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-router-certs\") pod 
\"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.103841 master-0 kubenswrapper[19715]: I0313 12:57:19.103813 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.103917 master-0 kubenswrapper[19715]: I0313 12:57:19.103865 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.103975 master-0 kubenswrapper[19715]: I0313 12:57:19.103922 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.106509 master-0 kubenswrapper[19715]: I0313 12:57:19.105683 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " 
pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.106509 master-0 kubenswrapper[19715]: I0313 12:57:19.106229 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.109092 master-0 kubenswrapper[19715]: I0313 12:57:19.109049 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-policies\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.109207 master-0 kubenswrapper[19715]: I0313 12:57:19.109131 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.109207 master-0 kubenswrapper[19715]: I0313 12:57:19.109147 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-router-certs\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.110166 master-0 kubenswrapper[19715]: I0313 12:57:19.109933 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.112606 master-0 kubenswrapper[19715]: I0313 12:57:19.112474 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-login\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.112606 master-0 kubenswrapper[19715]: I0313 12:57:19.112522 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-error\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.112836 master-0 kubenswrapper[19715]: I0313 12:57:19.112786 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-session\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.113313 master-0 kubenswrapper[19715]: I0313 12:57:19.113259 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: 
\"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.113445 master-0 kubenswrapper[19715]: I0313 12:57:19.113397 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-5fc8d565d6-jmhct"] Mar 13 12:57:19.127254 master-0 kubenswrapper[19715]: I0313 12:57:19.127178 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56q46\" (UniqueName: \"kubernetes.io/projected/195e6d6e-01a1-44aa-8ebe-b1379da7808f-kube-api-access-56q46\") pod \"oauth-openshift-7684d47ff-6fs5b\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.224078 master-0 kubenswrapper[19715]: I0313 12:57:19.223985 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:19.708171 master-0 kubenswrapper[19715]: I0313 12:57:19.708030 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16eeeff8-7c53-4c3e-876a-cff0902955fd" path="/var/lib/kubelet/pods/16eeeff8-7c53-4c3e-876a-cff0902955fd/volumes" Mar 13 12:57:19.744680 master-0 kubenswrapper[19715]: I0313 12:57:19.744490 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-7684d47ff-6fs5b"] Mar 13 12:57:19.747137 master-0 kubenswrapper[19715]: W0313 12:57:19.746879 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod195e6d6e_01a1_44aa_8ebe_b1379da7808f.slice/crio-62cdcf63259877f2212b62af67802ff8db0fbf4835c91979b960d1420adc91bb WatchSource:0}: Error finding container 62cdcf63259877f2212b62af67802ff8db0fbf4835c91979b960d1420adc91bb: Status 404 returned error can't find the container with id 62cdcf63259877f2212b62af67802ff8db0fbf4835c91979b960d1420adc91bb Mar 13 12:57:20.001371 master-0 
kubenswrapper[19715]: I0313 12:57:20.001227 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" event={"ID":"195e6d6e-01a1-44aa-8ebe-b1379da7808f","Type":"ContainerStarted","Data":"62cdcf63259877f2212b62af67802ff8db0fbf4835c91979b960d1420adc91bb"} Mar 13 12:57:20.689668 master-0 kubenswrapper[19715]: I0313 12:57:20.689520 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7684d47ff-6fs5b"] Mar 13 12:57:21.017779 master-0 kubenswrapper[19715]: I0313 12:57:21.017523 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" event={"ID":"195e6d6e-01a1-44aa-8ebe-b1379da7808f","Type":"ContainerStarted","Data":"9363b270d5da0dd2d6ce7015e22554c34b8b2d3dd2db9770b50b0760d4a2e526"} Mar 13 12:57:21.018635 master-0 kubenswrapper[19715]: I0313 12:57:21.018079 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:21.025689 master-0 kubenswrapper[19715]: I0313 12:57:21.025611 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:21.057534 master-0 kubenswrapper[19715]: I0313 12:57:21.057357 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" podStartSLOduration=28.057311051 podStartE2EDuration="28.057311051s" podCreationTimestamp="2026-03-13 12:56:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:57:21.047911245 +0000 UTC m=+467.614584002" watchObservedRunningTime="2026-03-13 12:57:21.057311051 +0000 UTC m=+467.623983808" Mar 13 12:57:21.775602 master-0 kubenswrapper[19715]: I0313 12:57:21.775486 19715 patch_prober.go:28] interesting 
pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:57:21.776059 master-0 kubenswrapper[19715]: I0313 12:57:21.775641 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:57:25.717156 master-0 kubenswrapper[19715]: I0313 12:57:25.717101 19715 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:57:25.717824 master-0 kubenswrapper[19715]: I0313 12:57:25.717169 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:57:26.336617 master-0 kubenswrapper[19715]: I0313 12:57:26.336466 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:57:26.336933 master-0 kubenswrapper[19715]: I0313 12:57:26.336635 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" 
output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:57:31.775831 master-0 kubenswrapper[19715]: I0313 12:57:31.775740 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:57:31.776595 master-0 kubenswrapper[19715]: I0313 12:57:31.775860 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:57:34.342937 master-0 kubenswrapper[19715]: I0313 12:57:34.342833 19715 scope.go:117] "RemoveContainer" containerID="e0df16178a78e597a7ee479c2a01d936d3b8faaeddfcab7a0e0bd1705858f6b0" Mar 13 12:57:34.364846 master-0 kubenswrapper[19715]: I0313 12:57:34.364791 19715 scope.go:117] "RemoveContainer" containerID="5e26810c41b04d6b7b18d460530be0d6b5cfdaf88d1a68d92b5c14e7b7261ce3" Mar 13 12:57:34.389934 master-0 kubenswrapper[19715]: I0313 12:57:34.389878 19715 scope.go:117] "RemoveContainer" containerID="9558436851ea5e9f09168e4882a85b318bea857709da4a1c87ae463ce4701ae4" Mar 13 12:57:34.409078 master-0 kubenswrapper[19715]: I0313 12:57:34.409027 19715 scope.go:117] "RemoveContainer" containerID="4e2cfb308e87476917dc63b51b8f4ff3598a6a7c3eff81f201ee2f39a779bdc1" Mar 13 12:57:35.094526 master-0 kubenswrapper[19715]: I0313 12:57:35.094444 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 13 12:57:35.095682 master-0 kubenswrapper[19715]: I0313 12:57:35.095622 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.099217 master-0 kubenswrapper[19715]: I0313 12:57:35.099155 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 12:57:35.099488 master-0 kubenswrapper[19715]: I0313 12:57:35.099446 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-phtzh" Mar 13 12:57:35.110805 master-0 kubenswrapper[19715]: I0313 12:57:35.110695 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 13 12:57:35.116748 master-0 kubenswrapper[19715]: I0313 12:57:35.116684 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-var-lock\") pod \"installer-4-master-0\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.117057 master-0 kubenswrapper[19715]: I0313 12:57:35.116799 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/652aeb51-3dc3-4346-bc3b-c614852a29d5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.117057 master-0 kubenswrapper[19715]: I0313 12:57:35.116825 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.218749 master-0 kubenswrapper[19715]: I0313 12:57:35.218598 19715 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/652aeb51-3dc3-4346-bc3b-c614852a29d5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.218749 master-0 kubenswrapper[19715]: I0313 12:57:35.218712 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.219137 master-0 kubenswrapper[19715]: I0313 12:57:35.218797 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-var-lock\") pod \"installer-4-master-0\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.219137 master-0 kubenswrapper[19715]: I0313 12:57:35.218887 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.219137 master-0 kubenswrapper[19715]: I0313 12:57:35.218929 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-var-lock\") pod \"installer-4-master-0\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.237221 master-0 kubenswrapper[19715]: I0313 12:57:35.237141 19715 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/652aeb51-3dc3-4346-bc3b-c614852a29d5-kube-api-access\") pod \"installer-4-master-0\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.465120 master-0 kubenswrapper[19715]: I0313 12:57:35.465029 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:57:35.941834 master-0 kubenswrapper[19715]: I0313 12:57:35.941790 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 13 12:57:36.171201 master-0 kubenswrapper[19715]: I0313 12:57:36.170227 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"652aeb51-3dc3-4346-bc3b-c614852a29d5","Type":"ContainerStarted","Data":"9f2c984610448c7faa41be32bb33605393f3f183b72c8c0277d57584b470eb25"} Mar 13 12:57:36.336007 master-0 kubenswrapper[19715]: I0313 12:57:36.335889 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:57:36.336007 master-0 kubenswrapper[19715]: I0313 12:57:36.335948 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:57:37.180906 master-0 kubenswrapper[19715]: I0313 12:57:37.180816 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" 
event={"ID":"652aeb51-3dc3-4346-bc3b-c614852a29d5","Type":"ContainerStarted","Data":"e32480afe0989e751c18352732b5374aa1a3e6ebaf96b27d2ab0674f51ab7033"} Mar 13 12:57:41.775176 master-0 kubenswrapper[19715]: I0313 12:57:41.775063 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:57:41.775915 master-0 kubenswrapper[19715]: I0313 12:57:41.775213 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:57:46.335468 master-0 kubenswrapper[19715]: I0313 12:57:46.335388 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:57:46.336260 master-0 kubenswrapper[19715]: I0313 12:57:46.335489 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:57:47.062537 master-0 kubenswrapper[19715]: I0313 12:57:47.062436 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" podUID="195e6d6e-01a1-44aa-8ebe-b1379da7808f" containerName="oauth-openshift" containerID="cri-o://9363b270d5da0dd2d6ce7015e22554c34b8b2d3dd2db9770b50b0760d4a2e526" gracePeriod=15 Mar 13 
12:57:47.276993 master-0 kubenswrapper[19715]: I0313 12:57:47.276915 19715 generic.go:334] "Generic (PLEG): container finished" podID="195e6d6e-01a1-44aa-8ebe-b1379da7808f" containerID="9363b270d5da0dd2d6ce7015e22554c34b8b2d3dd2db9770b50b0760d4a2e526" exitCode=0 Mar 13 12:57:47.277266 master-0 kubenswrapper[19715]: I0313 12:57:47.276981 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" event={"ID":"195e6d6e-01a1-44aa-8ebe-b1379da7808f","Type":"ContainerDied","Data":"9363b270d5da0dd2d6ce7015e22554c34b8b2d3dd2db9770b50b0760d4a2e526"} Mar 13 12:57:47.504730 master-0 kubenswrapper[19715]: I0313 12:57:47.504657 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" Mar 13 12:57:47.539676 master-0 kubenswrapper[19715]: I0313 12:57:47.537711 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-4-master-0" podStartSLOduration=12.537648505 podStartE2EDuration="12.537648505s" podCreationTimestamp="2026-03-13 12:57:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:57:37.208614135 +0000 UTC m=+483.775286912" watchObservedRunningTime="2026-03-13 12:57:47.537648505 +0000 UTC m=+494.104321262" Mar 13 12:57:47.549901 master-0 kubenswrapper[19715]: I0313 12:57:47.549822 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-5468d7b87-r5hj5"] Mar 13 12:57:47.550210 master-0 kubenswrapper[19715]: E0313 12:57:47.550186 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="195e6d6e-01a1-44aa-8ebe-b1379da7808f" containerName="oauth-openshift" Mar 13 12:57:47.550294 master-0 kubenswrapper[19715]: I0313 12:57:47.550217 19715 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="195e6d6e-01a1-44aa-8ebe-b1379da7808f" containerName="oauth-openshift" Mar 13 12:57:47.550400 master-0 kubenswrapper[19715]: I0313 12:57:47.550369 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="195e6d6e-01a1-44aa-8ebe-b1379da7808f" containerName="oauth-openshift" Mar 13 12:57:47.550973 master-0 kubenswrapper[19715]: I0313 12:57:47.550942 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" Mar 13 12:57:47.564931 master-0 kubenswrapper[19715]: I0313 12:57:47.564858 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5468d7b87-r5hj5"] Mar 13 12:57:47.650277 master-0 kubenswrapper[19715]: I0313 12:57:47.650166 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-error\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.650277 master-0 kubenswrapper[19715]: I0313 12:57:47.650297 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-service-ca\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.650374 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-provider-selection\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 
12:57:47.650560 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-router-certs\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.650722 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-ocp-branding-template\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.650817 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-session\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.650846 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-policies\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.650920 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56q46\" (UniqueName: \"kubernetes.io/projected/195e6d6e-01a1-44aa-8ebe-b1379da7808f-kube-api-access-56q46\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.650957 19715 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-serving-cert\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.650982 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-login\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.651014 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-cliconfig\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.651072 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-trusted-ca-bundle\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.651107 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-dir\") pod \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\" (UID: \"195e6d6e-01a1-44aa-8ebe-b1379da7808f\") " Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.651400 19715 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-user-template-login\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.651502 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-audit-dir\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.651542 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-user-template-error\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" Mar 13 12:57:47.651621 master-0 kubenswrapper[19715]: I0313 12:57:47.651565 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-service-ca\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.651681 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.651747 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-session\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.651788 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.651826 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpfvx\" (UniqueName: \"kubernetes.io/projected/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-kube-api-access-hpfvx\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.651833 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod 
"195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.651890 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652076 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652190 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652241 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652253 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-router-certs\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652299 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652328 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652399 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-audit-policies\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652527 19715 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-policies\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652550 19715 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/195e6d6e-01a1-44aa-8ebe-b1379da7808f-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652566 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-cliconfig\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.652625 master-0 kubenswrapper[19715]: I0313 12:57:47.652608 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-service-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.653727 master-0 kubenswrapper[19715]: I0313 12:57:47.653686 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 12:57:47.654646 master-0 kubenswrapper[19715]: I0313 12:57:47.654605 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:57:47.655246 master-0 kubenswrapper[19715]: I0313 12:57:47.655185 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:57:47.655602 master-0 kubenswrapper[19715]: I0313 12:57:47.655491 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:57:47.655672 master-0 kubenswrapper[19715]: I0313 12:57:47.655639 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:57:47.656177 master-0 kubenswrapper[19715]: I0313 12:57:47.656086 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/195e6d6e-01a1-44aa-8ebe-b1379da7808f-kube-api-access-56q46" (OuterVolumeSpecName: "kube-api-access-56q46") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "kube-api-access-56q46". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 12:57:47.656252 master-0 kubenswrapper[19715]: I0313 12:57:47.656231 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:57:47.658029 master-0 kubenswrapper[19715]: I0313 12:57:47.657985 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:57:47.658789 master-0 kubenswrapper[19715]: I0313 12:57:47.658675 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "195e6d6e-01a1-44aa-8ebe-b1379da7808f" (UID: "195e6d6e-01a1-44aa-8ebe-b1379da7808f"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 12:57:47.753803 master-0 kubenswrapper[19715]: I0313 12:57:47.753722 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-audit-dir\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.753803 master-0 kubenswrapper[19715]: I0313 12:57:47.753803 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-user-template-error\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754129 master-0 kubenswrapper[19715]: I0313 12:57:47.753832 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-service-ca\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754129 master-0 kubenswrapper[19715]: I0313 12:57:47.753889 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754129 master-0 kubenswrapper[19715]: I0313 12:57:47.753930 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-session\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754129 master-0 kubenswrapper[19715]: I0313 12:57:47.753960 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754129 master-0 kubenswrapper[19715]: I0313 12:57:47.753995 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hpfvx\" (UniqueName: \"kubernetes.io/projected/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-kube-api-access-hpfvx\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754129 master-0 kubenswrapper[19715]: I0313 12:57:47.754040 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754129 master-0 kubenswrapper[19715]: I0313 12:57:47.754075 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-router-certs\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754129 master-0 kubenswrapper[19715]: I0313 12:57:47.754104 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754129 master-0 kubenswrapper[19715]: I0313 12:57:47.754128 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754158 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-audit-policies\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754191 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-user-template-login\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754273 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754298 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754314 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-router-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754328 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-session\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754341 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56q46\" (UniqueName: \"kubernetes.io/projected/195e6d6e-01a1-44aa-8ebe-b1379da7808f-kube-api-access-56q46\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754357 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-login\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754371 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-error\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754385 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-user-template-provider-selection\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.754418 master-0 kubenswrapper[19715]: I0313 12:57:47.754406 19715 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/195e6d6e-01a1-44aa-8ebe-b1379da7808f-v4-0-config-system-ocp-branding-template\") on node \"master-0\" DevicePath \"\""
Mar 13 12:57:47.756318 master-0 kubenswrapper[19715]: I0313 12:57:47.756252 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-audit-dir\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.757306 master-0 kubenswrapper[19715]: I0313 12:57:47.757223 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-audit-policies\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.757548 master-0 kubenswrapper[19715]: I0313 12:57:47.757473 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-cliconfig\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.757750 master-0 kubenswrapper[19715]: I0313 12:57:47.757710 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.758236 master-0 kubenswrapper[19715]: I0313 12:57:47.758207 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-service-ca\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.760694 master-0 kubenswrapper[19715]: I0313 12:57:47.760651 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-user-template-error\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.761189 master-0 kubenswrapper[19715]: I0313 12:57:47.761144 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-session\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.761431 master-0 kubenswrapper[19715]: I0313 12:57:47.761392 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-user-template-login\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.762376 master-0 kubenswrapper[19715]: I0313 12:57:47.762312 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.762653 master-0 kubenswrapper[19715]: I0313 12:57:47.762599 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-router-certs\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.762851 master-0 kubenswrapper[19715]: I0313 12:57:47.762731 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.763261 master-0 kubenswrapper[19715]: I0313 12:57:47.763212 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-v4-0-config-system-serving-cert\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.774765 master-0 kubenswrapper[19715]: I0313 12:57:47.774684 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hpfvx\" (UniqueName: \"kubernetes.io/projected/ea4d792f-b0ff-4316-aeed-2dee2c6f1eea-kube-api-access-hpfvx\") pod \"oauth-openshift-5468d7b87-r5hj5\" (UID: \"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea\") " pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:47.892171 master-0 kubenswrapper[19715]: I0313 12:57:47.892032 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:48.289699 master-0 kubenswrapper[19715]: I0313 12:57:48.289414 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b" event={"ID":"195e6d6e-01a1-44aa-8ebe-b1379da7808f","Type":"ContainerDied","Data":"62cdcf63259877f2212b62af67802ff8db0fbf4835c91979b960d1420adc91bb"}
Mar 13 12:57:48.289699 master-0 kubenswrapper[19715]: I0313 12:57:48.289457 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-7684d47ff-6fs5b"
Mar 13 12:57:48.289699 master-0 kubenswrapper[19715]: I0313 12:57:48.289477 19715 scope.go:117] "RemoveContainer" containerID="9363b270d5da0dd2d6ce7015e22554c34b8b2d3dd2db9770b50b0760d4a2e526"
Mar 13 12:57:48.329516 master-0 kubenswrapper[19715]: I0313 12:57:48.329440 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-7684d47ff-6fs5b"]
Mar 13 12:57:48.336812 master-0 kubenswrapper[19715]: I0313 12:57:48.336748 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-7684d47ff-6fs5b"]
Mar 13 12:57:48.378023 master-0 kubenswrapper[19715]: I0313 12:57:48.377956 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-5468d7b87-r5hj5"]
Mar 13 12:57:48.379555 master-0 kubenswrapper[19715]: W0313 12:57:48.379429 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea4d792f_b0ff_4316_aeed_2dee2c6f1eea.slice/crio-c8037721de6d52c5454fd90e4c93cf59710b4aa72cf175d83737c7f77368f408 WatchSource:0}: Error finding container c8037721de6d52c5454fd90e4c93cf59710b4aa72cf175d83737c7f77368f408: Status 404 returned error can't find the container with id c8037721de6d52c5454fd90e4c93cf59710b4aa72cf175d83737c7f77368f408
Mar 13 12:57:49.304748 master-0 kubenswrapper[19715]: I0313 12:57:49.304665 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" event={"ID":"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea","Type":"ContainerStarted","Data":"bde9102e68681138f06bcfe9e5c12e316d40793361f37f7162a28069f6d19786"}
Mar 13 12:57:49.304748 master-0 kubenswrapper[19715]: I0313 12:57:49.304725 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" event={"ID":"ea4d792f-b0ff-4316-aeed-2dee2c6f1eea","Type":"ContainerStarted","Data":"c8037721de6d52c5454fd90e4c93cf59710b4aa72cf175d83737c7f77368f408"}
Mar 13 12:57:49.305865 master-0 kubenswrapper[19715]: I0313 12:57:49.305111 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:49.315699 master-0 kubenswrapper[19715]: I0313 12:57:49.315624 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5"
Mar 13 12:57:49.336278 master-0 kubenswrapper[19715]: I0313 12:57:49.336188 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-5468d7b87-r5hj5" podStartSLOduration=16.336167322 podStartE2EDuration="16.336167322s" podCreationTimestamp="2026-03-13 12:57:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:57:49.331136319 +0000 UTC m=+495.897809086" watchObservedRunningTime="2026-03-13 12:57:49.336167322 +0000 UTC m=+495.902840079"
Mar 13 12:57:49.706933 master-0 kubenswrapper[19715]: I0313 12:57:49.706866 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="195e6d6e-01a1-44aa-8ebe-b1379da7808f" path="/var/lib/kubelet/pods/195e6d6e-01a1-44aa-8ebe-b1379da7808f/volumes"
Mar 13 12:57:51.077258 master-0 kubenswrapper[19715]: I0313 12:57:51.077146 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 12:57:51.078429 master-0 kubenswrapper[19715]: I0313 12:57:51.077736 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="cluster-policy-controller" containerID="cri-o://6e9a116bda80ce7fe4e93d1c23741a0678a4bf66c268c954fc757c04183b5157" gracePeriod=30
Mar 13 12:57:51.078429 master-0 kubenswrapper[19715]: I0313 12:57:51.077837 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://3676744a93dc4b275eb6a7cc11028760f14bb722b4e049db371fc67c6d22dd94" gracePeriod=30
Mar 13 12:57:51.078429 master-0 kubenswrapper[19715]: I0313 12:57:51.077928 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://9727c8d2dac755dd7ea1b9ad8ff6c17a8b645c1accc27700962725c430cd1484" gracePeriod=30
Mar 13 12:57:51.078429 master-0 kubenswrapper[19715]: I0313 12:57:51.077756 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager" containerID="cri-o://91aba06d3555721ac7156a0d1fb3bcdde07eaa20c73d384ae32e60bb0e44531d" gracePeriod=30
Mar 13 12:57:51.080880 master-0 kubenswrapper[19715]: I0313 12:57:51.080309 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 12:57:51.080880 master-0 kubenswrapper[19715]: E0313 12:57:51.080879 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: I0313 12:57:51.080903 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: E0313 12:57:51.080929 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="cluster-policy-controller"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: I0313 12:57:51.080938 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="cluster-policy-controller"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: E0313 12:57:51.080969 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager-recovery-controller"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: I0313 12:57:51.080982 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager-recovery-controller"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: E0313 12:57:51.081005 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager-cert-syncer"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: I0313 12:57:51.081012 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager-cert-syncer"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: I0313 12:57:51.081216 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: I0313 12:57:51.081235 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager-cert-syncer"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: I0313 12:57:51.081249 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: I0313 12:57:51.081284 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager-recovery-controller"
Mar 13 12:57:51.081312 master-0 kubenswrapper[19715]: I0313 12:57:51.081300 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="cluster-policy-controller"
Mar 13 12:57:51.082921 master-0 kubenswrapper[19715]: E0313 12:57:51.081469 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager"
Mar 13 12:57:51.082921 master-0 kubenswrapper[19715]: I0313 12:57:51.081484 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" containerName="kube-controller-manager"
Mar 13 12:57:51.218710 master-0 kubenswrapper[19715]: I0313 12:57:51.218612 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a38f6c36de78a5cb446093c52f21a20d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a38f6c36de78a5cb446093c52f21a20d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:57:51.219205 master-0 kubenswrapper[19715]: I0313 12:57:51.219139 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a38f6c36de78a5cb446093c52f21a20d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a38f6c36de78a5cb446093c52f21a20d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:57:51.283704 master-0 kubenswrapper[19715]: I0313 12:57:51.283641 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_801e0e0ab4a7a1c742dfa21c487f9cca/kube-controller-manager-cert-syncer/0.log"
Mar 13 12:57:51.284696 master-0 kubenswrapper[19715]: I0313 12:57:51.284655 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_801e0e0ab4a7a1c742dfa21c487f9cca/kube-controller-manager/0.log"
Mar 13 12:57:51.284819 master-0 kubenswrapper[19715]: I0313 12:57:51.284788 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:57:51.288813 master-0 kubenswrapper[19715]: I0313 12:57:51.288757 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="801e0e0ab4a7a1c742dfa21c487f9cca" podUID="a38f6c36de78a5cb446093c52f21a20d"
Mar 13 12:57:51.321160 master-0 kubenswrapper[19715]: I0313 12:57:51.321054 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a38f6c36de78a5cb446093c52f21a20d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a38f6c36de78a5cb446093c52f21a20d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:57:51.321396 master-0 kubenswrapper[19715]: I0313 12:57:51.321233 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a38f6c36de78a5cb446093c52f21a20d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a38f6c36de78a5cb446093c52f21a20d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:57:51.321490 master-0 kubenswrapper[19715]: I0313 12:57:51.321408 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a38f6c36de78a5cb446093c52f21a20d-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a38f6c36de78a5cb446093c52f21a20d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:57:51.321490 master-0 kubenswrapper[19715]: I0313 12:57:51.321467 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/a38f6c36de78a5cb446093c52f21a20d-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"a38f6c36de78a5cb446093c52f21a20d\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:57:51.324408 master-0 kubenswrapper[19715]: I0313 12:57:51.324370 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_801e0e0ab4a7a1c742dfa21c487f9cca/kube-controller-manager-cert-syncer/0.log"
Mar 13 12:57:51.327650 master-0 kubenswrapper[19715]: I0313 12:57:51.327528 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_801e0e0ab4a7a1c742dfa21c487f9cca/kube-controller-manager/0.log"
Mar 13 12:57:51.327650 master-0 kubenswrapper[19715]: I0313 12:57:51.327623 19715 generic.go:334] "Generic (PLEG): container finished" podID="801e0e0ab4a7a1c742dfa21c487f9cca" containerID="91aba06d3555721ac7156a0d1fb3bcdde07eaa20c73d384ae32e60bb0e44531d" exitCode=0
Mar 13 12:57:51.327817 master-0 kubenswrapper[19715]: I0313 12:57:51.327656 19715 generic.go:334] "Generic (PLEG): container finished" podID="801e0e0ab4a7a1c742dfa21c487f9cca" containerID="3676744a93dc4b275eb6a7cc11028760f14bb722b4e049db371fc67c6d22dd94" exitCode=0
Mar 13 12:57:51.327817 master-0 kubenswrapper[19715]: I0313 12:57:51.327669 19715 generic.go:334] "Generic (PLEG): container finished" podID="801e0e0ab4a7a1c742dfa21c487f9cca" containerID="9727c8d2dac755dd7ea1b9ad8ff6c17a8b645c1accc27700962725c430cd1484" exitCode=2
Mar 13 12:57:51.327817 master-0 kubenswrapper[19715]: I0313 12:57:51.327679 19715 generic.go:334] "Generic (PLEG): container finished" podID="801e0e0ab4a7a1c742dfa21c487f9cca" containerID="6e9a116bda80ce7fe4e93d1c23741a0678a4bf66c268c954fc757c04183b5157" exitCode=0
Mar 13 12:57:51.327817 master-0 kubenswrapper[19715]: I0313 12:57:51.327733 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 12:57:51.327817 master-0 kubenswrapper[19715]: I0313 12:57:51.327774 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5454f15f77b6cb561e7eb8617b14ada5ba037b44712b3329b67f9663d58382d"
Mar 13 12:57:51.327817 master-0 kubenswrapper[19715]: I0313 12:57:51.327801 19715 scope.go:117] "RemoveContainer" containerID="79d2112532eb814b4ddc9e964815bfcf0f82c0b3839cd7c9db7b085901b612ca"
Mar 13 12:57:51.331012 master-0 kubenswrapper[19715]: I0313 12:57:51.330961 19715 generic.go:334] "Generic (PLEG): container finished" podID="b1e97610-42e2-4c62-82c8-787d8a4c8a05" containerID="d06ff9d9ed53d9358cd6299f91967ef9d40878585f064748b01a8d7986aee1f5" exitCode=0
Mar 13 12:57:51.331012 master-0 kubenswrapper[19715]: I0313 12:57:51.330998 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"b1e97610-42e2-4c62-82c8-787d8a4c8a05","Type":"ContainerDied","Data":"d06ff9d9ed53d9358cd6299f91967ef9d40878585f064748b01a8d7986aee1f5"}
Mar 13 12:57:51.332320 master-0 kubenswrapper[19715]: I0313 12:57:51.332253 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="801e0e0ab4a7a1c742dfa21c487f9cca" podUID="a38f6c36de78a5cb446093c52f21a20d"
Mar 13 12:57:51.422880 master-0 kubenswrapper[19715]: I0313 12:57:51.422789 19715 reconciler_common.go:159]
"operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-resource-dir\") pod \"801e0e0ab4a7a1c742dfa21c487f9cca\" (UID: \"801e0e0ab4a7a1c742dfa21c487f9cca\") " Mar 13 12:57:51.423484 master-0 kubenswrapper[19715]: I0313 12:57:51.423043 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-cert-dir\") pod \"801e0e0ab4a7a1c742dfa21c487f9cca\" (UID: \"801e0e0ab4a7a1c742dfa21c487f9cca\") " Mar 13 12:57:51.423484 master-0 kubenswrapper[19715]: I0313 12:57:51.423073 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "801e0e0ab4a7a1c742dfa21c487f9cca" (UID: "801e0e0ab4a7a1c742dfa21c487f9cca"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:57:51.423484 master-0 kubenswrapper[19715]: I0313 12:57:51.423181 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "801e0e0ab4a7a1c742dfa21c487f9cca" (UID: "801e0e0ab4a7a1c742dfa21c487f9cca"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:57:51.425164 master-0 kubenswrapper[19715]: I0313 12:57:51.425102 19715 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:51.425164 master-0 kubenswrapper[19715]: I0313 12:57:51.425160 19715 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/801e0e0ab4a7a1c742dfa21c487f9cca-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:51.655445 master-0 kubenswrapper[19715]: I0313 12:57:51.655350 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="801e0e0ab4a7a1c742dfa21c487f9cca" podUID="a38f6c36de78a5cb446093c52f21a20d" Mar 13 12:57:51.715945 master-0 kubenswrapper[19715]: I0313 12:57:51.715556 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="801e0e0ab4a7a1c742dfa21c487f9cca" path="/var/lib/kubelet/pods/801e0e0ab4a7a1c742dfa21c487f9cca/volumes" Mar 13 12:57:51.775770 master-0 kubenswrapper[19715]: I0313 12:57:51.775675 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:57:51.775770 master-0 kubenswrapper[19715]: I0313 12:57:51.775777 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:57:52.347766 master-0 kubenswrapper[19715]: I0313 12:57:52.347689 19715 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_801e0e0ab4a7a1c742dfa21c487f9cca/kube-controller-manager-cert-syncer/0.log" Mar 13 12:57:52.693343 master-0 kubenswrapper[19715]: I0313 12:57:52.693260 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:52.865196 master-0 kubenswrapper[19715]: I0313 12:57:52.865072 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-var-lock\") pod \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " Mar 13 12:57:52.865554 master-0 kubenswrapper[19715]: I0313 12:57:52.865349 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kubelet-dir\") pod \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " Mar 13 12:57:52.865554 master-0 kubenswrapper[19715]: I0313 12:57:52.865328 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-var-lock" (OuterVolumeSpecName: "var-lock") pod "b1e97610-42e2-4c62-82c8-787d8a4c8a05" (UID: "b1e97610-42e2-4c62-82c8-787d8a4c8a05"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:57:52.865554 master-0 kubenswrapper[19715]: I0313 12:57:52.865462 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kube-api-access\") pod \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\" (UID: \"b1e97610-42e2-4c62-82c8-787d8a4c8a05\") " Mar 13 12:57:52.865825 master-0 kubenswrapper[19715]: I0313 12:57:52.865536 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b1e97610-42e2-4c62-82c8-787d8a4c8a05" (UID: "b1e97610-42e2-4c62-82c8-787d8a4c8a05"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:57:52.869418 master-0 kubenswrapper[19715]: I0313 12:57:52.869339 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:52.869516 master-0 kubenswrapper[19715]: I0313 12:57:52.869435 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:52.870964 master-0 kubenswrapper[19715]: I0313 12:57:52.870877 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b1e97610-42e2-4c62-82c8-787d8a4c8a05" (UID: "b1e97610-42e2-4c62-82c8-787d8a4c8a05"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:57:52.972423 master-0 kubenswrapper[19715]: I0313 12:57:52.972214 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b1e97610-42e2-4c62-82c8-787d8a4c8a05-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:57:53.360426 master-0 kubenswrapper[19715]: I0313 12:57:53.360206 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-6-master-0" event={"ID":"b1e97610-42e2-4c62-82c8-787d8a4c8a05","Type":"ContainerDied","Data":"c04372d475da0a9efff3c4bf26bf2e2ed6d8dca0c554d9c6efe958bbfaddc62d"} Mar 13 12:57:53.360426 master-0 kubenswrapper[19715]: I0313 12:57:53.360291 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c04372d475da0a9efff3c4bf26bf2e2ed6d8dca0c554d9c6efe958bbfaddc62d" Mar 13 12:57:53.360426 master-0 kubenswrapper[19715]: I0313 12:57:53.360340 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-6-master-0" Mar 13 12:57:53.693767 master-0 kubenswrapper[19715]: I0313 12:57:53.693661 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 13 12:57:53.694224 master-0 kubenswrapper[19715]: I0313 12:57:53.693962 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-4-master-0" podUID="652aeb51-3dc3-4346-bc3b-c614852a29d5" containerName="installer" containerID="cri-o://e32480afe0989e751c18352732b5374aa1a3e6ebaf96b27d2ab0674f51ab7033" gracePeriod=30 Mar 13 12:57:55.717637 master-0 kubenswrapper[19715]: I0313 12:57:55.717498 19715 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 12:57:55.718399 master-0 kubenswrapper[19715]: I0313 12:57:55.717632 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 12:57:55.718399 master-0 kubenswrapper[19715]: I0313 12:57:55.717700 19715 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" Mar 13 12:57:55.719161 master-0 kubenswrapper[19715]: I0313 12:57:55.719118 19715 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4372efee115bd956f110a1686f5b4492f3fae1a8246f84646b05662580e9f09a"} 
pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 12:57:55.719234 master-0 kubenswrapper[19715]: I0313 12:57:55.719213 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" containerID="cri-o://4372efee115bd956f110a1686f5b4492f3fae1a8246f84646b05662580e9f09a" gracePeriod=600 Mar 13 12:57:56.335933 master-0 kubenswrapper[19715]: I0313 12:57:56.335852 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:57:56.336258 master-0 kubenswrapper[19715]: I0313 12:57:56.335950 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:57:56.392676 master-0 kubenswrapper[19715]: I0313 12:57:56.392610 19715 generic.go:334] "Generic (PLEG): container finished" podID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerID="4372efee115bd956f110a1686f5b4492f3fae1a8246f84646b05662580e9f09a" exitCode=0 Mar 13 12:57:56.392676 master-0 kubenswrapper[19715]: I0313 12:57:56.392674 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerDied","Data":"4372efee115bd956f110a1686f5b4492f3fae1a8246f84646b05662580e9f09a"} Mar 13 12:57:56.393028 master-0 kubenswrapper[19715]: I0313 12:57:56.392720 19715 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" event={"ID":"e8d83309-58b2-40af-ab48-1f8b9aeffefb","Type":"ContainerStarted","Data":"62105aa96d0babfc437c78b2c7aa57358d98b2d2b26f996495d12e023b0178d7"} Mar 13 12:57:56.393028 master-0 kubenswrapper[19715]: I0313 12:57:56.392740 19715 scope.go:117] "RemoveContainer" containerID="5b99add1353856acea33dcb530c729d1f04a71fe3603e00ce50bcb93fec430ed" Mar 13 12:57:56.892125 master-0 kubenswrapper[19715]: I0313 12:57:56.892040 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 12:57:56.892839 master-0 kubenswrapper[19715]: E0313 12:57:56.892381 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e97610-42e2-4c62-82c8-787d8a4c8a05" containerName="installer" Mar 13 12:57:56.892839 master-0 kubenswrapper[19715]: I0313 12:57:56.892399 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e97610-42e2-4c62-82c8-787d8a4c8a05" containerName="installer" Mar 13 12:57:56.892839 master-0 kubenswrapper[19715]: I0313 12:57:56.892661 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e97610-42e2-4c62-82c8-787d8a4c8a05" containerName="installer" Mar 13 12:57:56.893195 master-0 kubenswrapper[19715]: I0313 12:57:56.893170 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:56.928847 master-0 kubenswrapper[19715]: I0313 12:57:56.928758 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kube-api-access\") pod \"installer-5-master-0\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:56.929150 master-0 kubenswrapper[19715]: I0313 12:57:56.929012 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-var-lock\") pod \"installer-5-master-0\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:56.929150 master-0 kubenswrapper[19715]: I0313 12:57:56.929114 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:56.978651 master-0 kubenswrapper[19715]: I0313 12:57:56.978545 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 12:57:57.030611 master-0 kubenswrapper[19715]: I0313 12:57:57.030390 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:57.030611 master-0 kubenswrapper[19715]: I0313 12:57:57.030546 19715 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kube-api-access\") pod \"installer-5-master-0\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:57.030963 master-0 kubenswrapper[19715]: I0313 12:57:57.030646 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-var-lock\") pod \"installer-5-master-0\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:57.030963 master-0 kubenswrapper[19715]: I0313 12:57:57.030757 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-var-lock\") pod \"installer-5-master-0\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:57.030963 master-0 kubenswrapper[19715]: I0313 12:57:57.030815 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:57.071242 master-0 kubenswrapper[19715]: I0313 12:57:57.071155 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kube-api-access\") pod \"installer-5-master-0\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:57.213889 master-0 kubenswrapper[19715]: I0313 12:57:57.213809 19715 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:57:57.683125 master-0 kubenswrapper[19715]: I0313 12:57:57.683006 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-5-master-0"] Mar 13 12:57:57.693241 master-0 kubenswrapper[19715]: W0313 12:57:57.693146 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7c65fa87_b404_4e2d_b730_d8e3ae5a0990.slice/crio-af724f33c14cdd890b209c532978e4212183c33b20822bd991af2cf66bcf723e WatchSource:0}: Error finding container af724f33c14cdd890b209c532978e4212183c33b20822bd991af2cf66bcf723e: Status 404 returned error can't find the container with id af724f33c14cdd890b209c532978e4212183c33b20822bd991af2cf66bcf723e Mar 13 12:57:58.420737 master-0 kubenswrapper[19715]: I0313 12:57:58.420559 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"7c65fa87-b404-4e2d-b730-d8e3ae5a0990","Type":"ContainerStarted","Data":"fc1b321d58cf5448a6ee13171e2d4356e254c375b3ecb568a421fdcb18558d72"} Mar 13 12:57:58.420737 master-0 kubenswrapper[19715]: I0313 12:57:58.420726 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"7c65fa87-b404-4e2d-b730-d8e3ae5a0990","Type":"ContainerStarted","Data":"af724f33c14cdd890b209c532978e4212183c33b20822bd991af2cf66bcf723e"} Mar 13 12:58:01.775779 master-0 kubenswrapper[19715]: I0313 12:58:01.775702 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:58:01.777354 master-0 kubenswrapper[19715]: I0313 12:58:01.775792 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" 
podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:58:03.696977 master-0 kubenswrapper[19715]: I0313 12:58:03.696845 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:03.733230 master-0 kubenswrapper[19715]: I0313 12:58:03.733165 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="1f00f82d-9fdf-4677-af46-f68033a9fd57" Mar 13 12:58:03.733506 master-0 kubenswrapper[19715]: I0313 12:58:03.733485 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="1f00f82d-9fdf-4677-af46-f68033a9fd57" Mar 13 12:58:03.748887 master-0 kubenswrapper[19715]: I0313 12:58:03.748812 19715 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:03.757400 master-0 kubenswrapper[19715]: I0313 12:58:03.757289 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-5-master-0" podStartSLOduration=7.757265878 podStartE2EDuration="7.757265878s" podCreationTimestamp="2026-03-13 12:57:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:57:58.807925002 +0000 UTC m=+505.374597759" watchObservedRunningTime="2026-03-13 12:58:03.757265878 +0000 UTC m=+510.323938635" Mar 13 12:58:03.757771 master-0 kubenswrapper[19715]: I0313 12:58:03.757734 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:58:03.763801 master-0 kubenswrapper[19715]: I0313 12:58:03.763755 
19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:58:03.769035 master-0 kubenswrapper[19715]: I0313 12:58:03.768922 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:03.779209 master-0 kubenswrapper[19715]: I0313 12:58:03.778721 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 12:58:04.473647 master-0 kubenswrapper[19715]: I0313 12:58:04.473571 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a38f6c36de78a5cb446093c52f21a20d","Type":"ContainerStarted","Data":"f182e7b092cd27f994dc8bb1b513cc5a74c393912d585aca232e1cd2b6f7782d"} Mar 13 12:58:04.473647 master-0 kubenswrapper[19715]: I0313 12:58:04.473641 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a38f6c36de78a5cb446093c52f21a20d","Type":"ContainerStarted","Data":"f357e81908ad10f5e64bb39fb55781d86cbe402be9ed1b0dcf2bf8216bedd524"} Mar 13 12:58:04.473647 master-0 kubenswrapper[19715]: I0313 12:58:04.473651 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a38f6c36de78a5cb446093c52f21a20d","Type":"ContainerStarted","Data":"ebf851439df2418d1e52671fee17d0f5597b04f4f43d70de6810bf7d32f4b871"} Mar 13 12:58:05.493084 master-0 kubenswrapper[19715]: I0313 12:58:05.492995 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a38f6c36de78a5cb446093c52f21a20d","Type":"ContainerStarted","Data":"636ae260964b8b48322851248394e956e2a0833034e91115a39759567990c6ab"} Mar 13 12:58:05.493084 
master-0 kubenswrapper[19715]: I0313 12:58:05.493094 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a38f6c36de78a5cb446093c52f21a20d","Type":"ContainerStarted","Data":"789bb4771da1b1d0124fafc5a07b740c52b3a33cd52cbe73ffc205c2f2f63c11"} Mar 13 12:58:06.335599 master-0 kubenswrapper[19715]: I0313 12:58:06.335497 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:58:06.335911 master-0 kubenswrapper[19715]: I0313 12:58:06.335600 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:58:07.523913 master-0 kubenswrapper[19715]: I0313 12:58:07.523826 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_652aeb51-3dc3-4346-bc3b-c614852a29d5/installer/0.log" Mar 13 12:58:07.524571 master-0 kubenswrapper[19715]: I0313 12:58:07.523972 19715 generic.go:334] "Generic (PLEG): container finished" podID="652aeb51-3dc3-4346-bc3b-c614852a29d5" containerID="e32480afe0989e751c18352732b5374aa1a3e6ebaf96b27d2ab0674f51ab7033" exitCode=1 Mar 13 12:58:07.524571 master-0 kubenswrapper[19715]: I0313 12:58:07.524095 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"652aeb51-3dc3-4346-bc3b-c614852a29d5","Type":"ContainerDied","Data":"e32480afe0989e751c18352732b5374aa1a3e6ebaf96b27d2ab0674f51ab7033"} Mar 13 12:58:07.693373 master-0 kubenswrapper[19715]: I0313 12:58:07.693284 19715 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_652aeb51-3dc3-4346-bc3b-c614852a29d5/installer/0.log" Mar 13 12:58:07.693749 master-0 kubenswrapper[19715]: I0313 12:58:07.693434 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:58:07.751125 master-0 kubenswrapper[19715]: I0313 12:58:07.748342 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=4.74828434 podStartE2EDuration="4.74828434s" podCreationTimestamp="2026-03-13 12:58:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:58:05.529611124 +0000 UTC m=+512.096283891" watchObservedRunningTime="2026-03-13 12:58:07.74828434 +0000 UTC m=+514.314957097" Mar 13 12:58:07.781401 master-0 kubenswrapper[19715]: I0313 12:58:07.781314 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/652aeb51-3dc3-4346-bc3b-c614852a29d5-kube-api-access\") pod \"652aeb51-3dc3-4346-bc3b-c614852a29d5\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " Mar 13 12:58:07.782163 master-0 kubenswrapper[19715]: I0313 12:58:07.781564 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-kubelet-dir\") pod \"652aeb51-3dc3-4346-bc3b-c614852a29d5\" (UID: \"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " Mar 13 12:58:07.782163 master-0 kubenswrapper[19715]: I0313 12:58:07.781661 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-var-lock\") pod \"652aeb51-3dc3-4346-bc3b-c614852a29d5\" (UID: 
\"652aeb51-3dc3-4346-bc3b-c614852a29d5\") " Mar 13 12:58:07.783515 master-0 kubenswrapper[19715]: I0313 12:58:07.783469 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "652aeb51-3dc3-4346-bc3b-c614852a29d5" (UID: "652aeb51-3dc3-4346-bc3b-c614852a29d5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:58:07.783691 master-0 kubenswrapper[19715]: I0313 12:58:07.783518 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-var-lock" (OuterVolumeSpecName: "var-lock") pod "652aeb51-3dc3-4346-bc3b-c614852a29d5" (UID: "652aeb51-3dc3-4346-bc3b-c614852a29d5"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:58:07.787312 master-0 kubenswrapper[19715]: I0313 12:58:07.787277 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/652aeb51-3dc3-4346-bc3b-c614852a29d5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "652aeb51-3dc3-4346-bc3b-c614852a29d5" (UID: "652aeb51-3dc3-4346-bc3b-c614852a29d5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:58:07.883526 master-0 kubenswrapper[19715]: I0313 12:58:07.883365 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:07.883526 master-0 kubenswrapper[19715]: I0313 12:58:07.883424 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/652aeb51-3dc3-4346-bc3b-c614852a29d5-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:07.883526 master-0 kubenswrapper[19715]: I0313 12:58:07.883438 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/652aeb51-3dc3-4346-bc3b-c614852a29d5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:08.541622 master-0 kubenswrapper[19715]: I0313 12:58:08.541520 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-4-master-0_652aeb51-3dc3-4346-bc3b-c614852a29d5/installer/0.log" Mar 13 12:58:08.542665 master-0 kubenswrapper[19715]: I0313 12:58:08.541694 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-4-master-0" event={"ID":"652aeb51-3dc3-4346-bc3b-c614852a29d5","Type":"ContainerDied","Data":"9f2c984610448c7faa41be32bb33605393f3f183b72c8c0277d57584b470eb25"} Mar 13 12:58:08.542665 master-0 kubenswrapper[19715]: I0313 12:58:08.541898 19715 scope.go:117] "RemoveContainer" containerID="e32480afe0989e751c18352732b5374aa1a3e6ebaf96b27d2ab0674f51ab7033" Mar 13 12:58:08.542665 master-0 kubenswrapper[19715]: I0313 12:58:08.542218 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-4-master-0" Mar 13 12:58:08.608007 master-0 kubenswrapper[19715]: I0313 12:58:08.607896 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 13 12:58:08.625752 master-0 kubenswrapper[19715]: I0313 12:58:08.625658 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-4-master-0"] Mar 13 12:58:09.707194 master-0 kubenswrapper[19715]: I0313 12:58:09.707108 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="652aeb51-3dc3-4346-bc3b-c614852a29d5" path="/var/lib/kubelet/pods/652aeb51-3dc3-4346-bc3b-c614852a29d5/volumes" Mar 13 12:58:12.316939 master-0 kubenswrapper[19715]: I0313 12:58:12.315897 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:58:12.316939 master-0 kubenswrapper[19715]: I0313 12:58:12.315988 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:58:13.770278 master-0 kubenswrapper[19715]: I0313 12:58:13.769904 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:13.770278 master-0 kubenswrapper[19715]: I0313 12:58:13.769997 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:13.770278 master-0 kubenswrapper[19715]: I0313 12:58:13.770013 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:13.770278 master-0 kubenswrapper[19715]: I0313 12:58:13.770026 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:13.775124 master-0 kubenswrapper[19715]: I0313 12:58:13.775080 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:13.776169 master-0 kubenswrapper[19715]: I0313 12:58:13.776152 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:14.607447 master-0 kubenswrapper[19715]: I0313 12:58:14.607363 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:14.607949 master-0 kubenswrapper[19715]: I0313 12:58:14.607868 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:58:16.336601 master-0 kubenswrapper[19715]: I0313 12:58:16.336495 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:58:16.337498 master-0 kubenswrapper[19715]: I0313 12:58:16.336631 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:58:21.775556 master-0 kubenswrapper[19715]: I0313 
12:58:21.775441 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:58:21.776710 master-0 kubenswrapper[19715]: I0313 12:58:21.775568 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:58:26.336518 master-0 kubenswrapper[19715]: I0313 12:58:26.336402 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:58:26.336518 master-0 kubenswrapper[19715]: I0313 12:58:26.336511 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:58:31.775519 master-0 kubenswrapper[19715]: I0313 12:58:31.775418 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:58:31.775519 master-0 kubenswrapper[19715]: I0313 12:58:31.775531 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get 
\"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:58:36.335887 master-0 kubenswrapper[19715]: I0313 12:58:36.335813 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:58:36.336842 master-0 kubenswrapper[19715]: I0313 12:58:36.336768 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:58:41.775802 master-0 kubenswrapper[19715]: I0313 12:58:41.775671 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:58:41.775802 master-0 kubenswrapper[19715]: I0313 12:58:41.775749 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:58:46.106024 master-0 kubenswrapper[19715]: I0313 12:58:46.103954 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 12:58:46.106024 master-0 kubenswrapper[19715]: E0313 12:58:46.104563 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="652aeb51-3dc3-4346-bc3b-c614852a29d5" containerName="installer" Mar 13 12:58:46.106024 master-0 kubenswrapper[19715]: 
I0313 12:58:46.104615 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="652aeb51-3dc3-4346-bc3b-c614852a29d5" containerName="installer" Mar 13 12:58:46.106024 master-0 kubenswrapper[19715]: I0313 12:58:46.104854 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="652aeb51-3dc3-4346-bc3b-c614852a29d5" containerName="installer" Mar 13 12:58:46.106024 master-0 kubenswrapper[19715]: I0313 12:58:46.105712 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:58:46.108197 master-0 kubenswrapper[19715]: I0313 12:58:46.107465 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" containerID="cri-o://658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175" gracePeriod=15 Mar 13 12:58:46.108197 master-0 kubenswrapper[19715]: I0313 12:58:46.107877 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.108197 master-0 kubenswrapper[19715]: I0313 12:58:46.107978 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556" gracePeriod=15 Mar 13 12:58:46.108197 master-0 kubenswrapper[19715]: I0313 12:58:46.108021 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" containerID="cri-o://80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641" gracePeriod=15 Mar 13 12:58:46.111251 master-0 kubenswrapper[19715]: I0313 12:58:46.108106 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a" gracePeriod=15 Mar 13 12:58:46.111362 master-0 kubenswrapper[19715]: I0313 12:58:46.108224 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" containerID="cri-o://d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260" gracePeriod=15 Mar 13 12:58:46.111411 master-0 kubenswrapper[19715]: I0313 12:58:46.108608 19715 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:58:46.111917 master-0 kubenswrapper[19715]: E0313 12:58:46.111878 19715 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 13 12:58:46.111917 master-0 kubenswrapper[19715]: I0313 12:58:46.111909 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 13 12:58:46.112108 master-0 kubenswrapper[19715]: E0313 12:58:46.111933 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 12:58:46.112108 master-0 kubenswrapper[19715]: I0313 12:58:46.111943 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 12:58:46.112108 master-0 kubenswrapper[19715]: E0313 12:58:46.111982 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup" Mar 13 12:58:46.112108 master-0 kubenswrapper[19715]: I0313 12:58:46.111991 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="setup" Mar 13 12:58:46.112108 master-0 kubenswrapper[19715]: E0313 12:58:46.112009 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 13 12:58:46.112108 master-0 kubenswrapper[19715]: I0313 12:58:46.112022 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 13 12:58:46.112108 master-0 kubenswrapper[19715]: E0313 12:58:46.112044 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 13 12:58:46.112108 master-0 kubenswrapper[19715]: I0313 12:58:46.112052 19715 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 13 12:58:46.112108 master-0 kubenswrapper[19715]: E0313 12:58:46.112079 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 13 12:58:46.112108 master-0 kubenswrapper[19715]: I0313 12:58:46.112088 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 13 12:58:46.112668 master-0 kubenswrapper[19715]: I0313 12:58:46.112294 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver" Mar 13 12:58:46.112668 master-0 kubenswrapper[19715]: I0313 12:58:46.112334 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-check-endpoints" Mar 13 12:58:46.112668 master-0 kubenswrapper[19715]: I0313 12:58:46.112357 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 12:58:46.112668 master-0 kubenswrapper[19715]: I0313 12:58:46.112375 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-insecure-readyz" Mar 13 12:58:46.112668 master-0 kubenswrapper[19715]: I0313 12:58:46.112392 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="077dd10388b9e3e48a07382126e86621" containerName="kube-apiserver-cert-syncer" Mar 13 12:58:46.120759 master-0 kubenswrapper[19715]: I0313 12:58:46.120656 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" 
(UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.120759 master-0 kubenswrapper[19715]: I0313 12:58:46.120761 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.121186 master-0 kubenswrapper[19715]: I0313 12:58:46.120804 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:46.121186 master-0 kubenswrapper[19715]: I0313 12:58:46.120940 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.121186 master-0 kubenswrapper[19715]: I0313 12:58:46.121022 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.121186 master-0 kubenswrapper[19715]: I0313 12:58:46.121060 19715 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:46.121186 master-0 kubenswrapper[19715]: I0313 12:58:46.121122 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:46.121186 master-0 kubenswrapper[19715]: I0313 12:58:46.121171 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.223868 master-0 kubenswrapper[19715]: I0313 12:58:46.223775 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:46.224197 master-0 kubenswrapper[19715]: I0313 12:58:46.223915 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.224197 
master-0 kubenswrapper[19715]: I0313 12:58:46.223992 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.224197 master-0 kubenswrapper[19715]: I0313 12:58:46.224023 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.224197 master-0 kubenswrapper[19715]: I0313 12:58:46.224061 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:46.224197 master-0 kubenswrapper[19715]: I0313 12:58:46.224107 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.224197 master-0 kubenswrapper[19715]: I0313 12:58:46.224143 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.224197 master-0 kubenswrapper[19715]: I0313 12:58:46.224165 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:46.224710 master-0 kubenswrapper[19715]: I0313 12:58:46.224308 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:46.224710 master-0 kubenswrapper[19715]: I0313 12:58:46.224368 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:46.224710 master-0 kubenswrapper[19715]: I0313 12:58:46.224391 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.224710 master-0 kubenswrapper[19715]: I0313 12:58:46.224415 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: 
\"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.224710 master-0 kubenswrapper[19715]: I0313 12:58:46.224439 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.224710 master-0 kubenswrapper[19715]: I0313 12:58:46.224461 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/36d4251d3504cdc0ec85144c1379056c-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"36d4251d3504cdc0ec85144c1379056c\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:46.224710 master-0 kubenswrapper[19715]: I0313 12:58:46.224490 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.224710 master-0 kubenswrapper[19715]: I0313 12:58:46.224514 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"a814bd60de133d95cf99630a978c017e\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.248739 master-0 kubenswrapper[19715]: E0313 12:58:46.248651 19715 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: 
connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.249713 master-0 kubenswrapper[19715]: I0313 12:58:46.249559 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.299257 master-0 kubenswrapper[19715]: E0313 12:58:46.299024 19715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c68058b150b44 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a814bd60de133d95cf99630a978c017e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:58:46.294670148 +0000 UTC m=+552.861342905,LastTimestamp:2026-03-13 12:58:46.294670148 +0000 UTC m=+552.861342905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:58:46.337023 master-0 kubenswrapper[19715]: I0313 12:58:46.336912 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:58:46.337487 master-0 kubenswrapper[19715]: I0313 12:58:46.337046 19715 prober.go:107] 
"Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:58:46.909115 master-0 kubenswrapper[19715]: I0313 12:58:46.908956 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a814bd60de133d95cf99630a978c017e","Type":"ContainerStarted","Data":"80ebbe3e603359bd048440fc40f845e6a835449f423798047a8be9d5a8e84ded"} Mar 13 12:58:46.909762 master-0 kubenswrapper[19715]: I0313 12:58:46.909284 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"a814bd60de133d95cf99630a978c017e","Type":"ContainerStarted","Data":"1a7ad29afe0f4bddb60767c573277ec039e2bd89ae47ef9df3212f9f93ebef99"} Mar 13 12:58:46.912113 master-0 kubenswrapper[19715]: I0313 12:58:46.912011 19715 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:46.912231 master-0 kubenswrapper[19715]: E0313 12:58:46.912156 19715 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 12:58:46.916294 master-0 kubenswrapper[19715]: I0313 12:58:46.916232 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 13 12:58:46.918225 master-0 kubenswrapper[19715]: I0313 12:58:46.918128 19715 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641" exitCode=0 Mar 13 12:58:46.918225 master-0 kubenswrapper[19715]: I0313 12:58:46.918201 19715 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a" exitCode=0 Mar 13 12:58:46.918225 master-0 kubenswrapper[19715]: I0313 12:58:46.918214 19715 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556" exitCode=0 Mar 13 12:58:46.918225 master-0 kubenswrapper[19715]: I0313 12:58:46.918225 19715 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260" exitCode=2 Mar 13 12:58:46.922415 master-0 kubenswrapper[19715]: I0313 12:58:46.922333 19715 generic.go:334] "Generic (PLEG): container finished" podID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" containerID="fc1b321d58cf5448a6ee13171e2d4356e254c375b3ecb568a421fdcb18558d72" exitCode=0 Mar 13 12:58:46.922848 master-0 kubenswrapper[19715]: I0313 12:58:46.922423 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"7c65fa87-b404-4e2d-b730-d8e3ae5a0990","Type":"ContainerDied","Data":"fc1b321d58cf5448a6ee13171e2d4356e254c375b3ecb568a421fdcb18558d72"} Mar 13 12:58:46.925102 master-0 kubenswrapper[19715]: I0313 12:58:46.924960 19715 status_manager.go:851] "Failed to get status for pod" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" 
pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:46.926602 master-0 kubenswrapper[19715]: I0313 12:58:46.926438 19715 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:48.381529 master-0 kubenswrapper[19715]: I0313 12:58:48.381477 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:58:48.393732 master-0 kubenswrapper[19715]: I0313 12:58:48.393580 19715 status_manager.go:851] "Failed to get status for pod" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:48.473543 master-0 kubenswrapper[19715]: I0313 12:58:48.473446 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kubelet-dir\") pod \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " Mar 13 12:58:48.474793 master-0 kubenswrapper[19715]: I0313 12:58:48.473714 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kube-api-access\") pod \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\" (UID: 
\"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " Mar 13 12:58:48.474793 master-0 kubenswrapper[19715]: I0313 12:58:48.473730 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7c65fa87-b404-4e2d-b730-d8e3ae5a0990" (UID: "7c65fa87-b404-4e2d-b730-d8e3ae5a0990"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:58:48.474793 master-0 kubenswrapper[19715]: I0313 12:58:48.473971 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-var-lock\") pod \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\" (UID: \"7c65fa87-b404-4e2d-b730-d8e3ae5a0990\") " Mar 13 12:58:48.474793 master-0 kubenswrapper[19715]: I0313 12:58:48.474029 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-var-lock" (OuterVolumeSpecName: "var-lock") pod "7c65fa87-b404-4e2d-b730-d8e3ae5a0990" (UID: "7c65fa87-b404-4e2d-b730-d8e3ae5a0990"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:58:48.474793 master-0 kubenswrapper[19715]: I0313 12:58:48.474570 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:48.474793 master-0 kubenswrapper[19715]: I0313 12:58:48.474611 19715 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:48.478544 master-0 kubenswrapper[19715]: I0313 12:58:48.478435 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7c65fa87-b404-4e2d-b730-d8e3ae5a0990" (UID: "7c65fa87-b404-4e2d-b730-d8e3ae5a0990"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 12:58:48.575990 master-0 kubenswrapper[19715]: I0313 12:58:48.575867 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7c65fa87-b404-4e2d-b730-d8e3ae5a0990-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:48.674739 master-0 kubenswrapper[19715]: I0313 12:58:48.674668 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 13 12:58:48.676852 master-0 kubenswrapper[19715]: I0313 12:58:48.676779 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:48.678347 master-0 kubenswrapper[19715]: I0313 12:58:48.678288 19715 status_manager.go:851] "Failed to get status for pod" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:48.679252 master-0 kubenswrapper[19715]: I0313 12:58:48.679174 19715 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:48.779648 master-0 kubenswrapper[19715]: I0313 12:58:48.779555 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 13 12:58:48.780056 master-0 kubenswrapper[19715]: I0313 12:58:48.780038 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 13 12:58:48.780185 master-0 kubenswrapper[19715]: I0313 12:58:48.780171 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"077dd10388b9e3e48a07382126e86621\" (UID: \"077dd10388b9e3e48a07382126e86621\") " Mar 13 12:58:48.780492 master-0 
kubenswrapper[19715]: I0313 12:58:48.779830 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:58:48.780571 master-0 kubenswrapper[19715]: I0313 12:58:48.780126 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:58:48.780571 master-0 kubenswrapper[19715]: I0313 12:58:48.780334 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "077dd10388b9e3e48a07382126e86621" (UID: "077dd10388b9e3e48a07382126e86621"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 12:58:48.781276 master-0 kubenswrapper[19715]: I0313 12:58:48.781163 19715 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:48.781276 master-0 kubenswrapper[19715]: I0313 12:58:48.781270 19715 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:48.781382 master-0 kubenswrapper[19715]: I0313 12:58:48.781293 19715 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 12:58:48.944915 master-0 kubenswrapper[19715]: I0313 12:58:48.944835 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-5-master-0" Mar 13 12:58:48.945265 master-0 kubenswrapper[19715]: I0313 12:58:48.945074 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-5-master-0" event={"ID":"7c65fa87-b404-4e2d-b730-d8e3ae5a0990","Type":"ContainerDied","Data":"af724f33c14cdd890b209c532978e4212183c33b20822bd991af2cf66bcf723e"} Mar 13 12:58:48.945265 master-0 kubenswrapper[19715]: I0313 12:58:48.945180 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="af724f33c14cdd890b209c532978e4212183c33b20822bd991af2cf66bcf723e" Mar 13 12:58:48.949343 master-0 kubenswrapper[19715]: I0313 12:58:48.949310 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 13 12:58:48.950365 master-0 kubenswrapper[19715]: I0313 12:58:48.950330 19715 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175" exitCode=0 Mar 13 12:58:48.950521 master-0 kubenswrapper[19715]: I0313 12:58:48.950473 19715 scope.go:117] "RemoveContainer" containerID="80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641" Mar 13 12:58:48.950613 master-0 kubenswrapper[19715]: I0313 12:58:48.950495 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:58:48.975269 master-0 kubenswrapper[19715]: I0313 12:58:48.975142 19715 scope.go:117] "RemoveContainer" containerID="c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a" Mar 13 12:58:48.986185 master-0 kubenswrapper[19715]: I0313 12:58:48.985925 19715 status_manager.go:851] "Failed to get status for pod" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:48.987606 master-0 kubenswrapper[19715]: I0313 12:58:48.987514 19715 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:48.988485 master-0 kubenswrapper[19715]: I0313 12:58:48.988423 19715 status_manager.go:851] "Failed to get status for pod" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:48.989771 master-0 kubenswrapper[19715]: I0313 12:58:48.989704 19715 status_manager.go:851] "Failed to get status for pod" podUID="077dd10388b9e3e48a07382126e86621" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:49.001301 master-0 kubenswrapper[19715]: I0313 
12:58:49.001250 19715 scope.go:117] "RemoveContainer" containerID="9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556" Mar 13 12:58:49.021143 master-0 kubenswrapper[19715]: I0313 12:58:49.021097 19715 scope.go:117] "RemoveContainer" containerID="d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260" Mar 13 12:58:49.049235 master-0 kubenswrapper[19715]: I0313 12:58:49.049170 19715 scope.go:117] "RemoveContainer" containerID="658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175" Mar 13 12:58:49.070945 master-0 kubenswrapper[19715]: I0313 12:58:49.070883 19715 scope.go:117] "RemoveContainer" containerID="605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc" Mar 13 12:58:49.097317 master-0 kubenswrapper[19715]: I0313 12:58:49.097260 19715 scope.go:117] "RemoveContainer" containerID="80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641" Mar 13 12:58:49.098456 master-0 kubenswrapper[19715]: E0313 12:58:49.098186 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641\": container with ID starting with 80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641 not found: ID does not exist" containerID="80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641" Mar 13 12:58:49.098456 master-0 kubenswrapper[19715]: I0313 12:58:49.098254 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641"} err="failed to get container status \"80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641\": rpc error: code = NotFound desc = could not find container \"80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641\": container with ID starting with 80d40329f8299dd87ab5238cb3393db0ab0773e03f267c4e04b00b9a4bee7641 not found: ID does not exist" Mar 13 
12:58:49.098456 master-0 kubenswrapper[19715]: I0313 12:58:49.098299 19715 scope.go:117] "RemoveContainer" containerID="c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a" Mar 13 12:58:49.099243 master-0 kubenswrapper[19715]: E0313 12:58:49.099182 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a\": container with ID starting with c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a not found: ID does not exist" containerID="c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a" Mar 13 12:58:49.099309 master-0 kubenswrapper[19715]: I0313 12:58:49.099264 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a"} err="failed to get container status \"c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a\": rpc error: code = NotFound desc = could not find container \"c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a\": container with ID starting with c08f77eb8111a0f904c7f2c94c2dabe1f9afe1fbecf5f00b211e07cec905701a not found: ID does not exist" Mar 13 12:58:49.099352 master-0 kubenswrapper[19715]: I0313 12:58:49.099333 19715 scope.go:117] "RemoveContainer" containerID="9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556" Mar 13 12:58:49.099963 master-0 kubenswrapper[19715]: E0313 12:58:49.099812 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556\": container with ID starting with 9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556 not found: ID does not exist" containerID="9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556" Mar 13 12:58:49.099963 master-0 kubenswrapper[19715]: 
I0313 12:58:49.099846 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556"} err="failed to get container status \"9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556\": rpc error: code = NotFound desc = could not find container \"9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556\": container with ID starting with 9902e512caf6f618c1c8ef040a3a87808d27c1bbd52d64592466e391df1cb556 not found: ID does not exist" Mar 13 12:58:49.099963 master-0 kubenswrapper[19715]: I0313 12:58:49.099869 19715 scope.go:117] "RemoveContainer" containerID="d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260" Mar 13 12:58:49.100493 master-0 kubenswrapper[19715]: E0313 12:58:49.100434 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260\": container with ID starting with d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260 not found: ID does not exist" containerID="d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260" Mar 13 12:58:49.100493 master-0 kubenswrapper[19715]: I0313 12:58:49.100475 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260"} err="failed to get container status \"d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260\": rpc error: code = NotFound desc = could not find container \"d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260\": container with ID starting with d1715358ec51dcca84fe93c9c1aab3ae350d8963fbeff64fa6f9883fb3b38260 not found: ID does not exist" Mar 13 12:58:49.100625 master-0 kubenswrapper[19715]: I0313 12:58:49.100500 19715 scope.go:117] "RemoveContainer" 
containerID="658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175" Mar 13 12:58:49.102138 master-0 kubenswrapper[19715]: E0313 12:58:49.102021 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175\": container with ID starting with 658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175 not found: ID does not exist" containerID="658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175" Mar 13 12:58:49.102138 master-0 kubenswrapper[19715]: I0313 12:58:49.102049 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175"} err="failed to get container status \"658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175\": rpc error: code = NotFound desc = could not find container \"658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175\": container with ID starting with 658ea29c7a29ddb112ac973ac759af5744f10e904a62c5889234f7fde3fc8175 not found: ID does not exist" Mar 13 12:58:49.102138 master-0 kubenswrapper[19715]: I0313 12:58:49.102064 19715 scope.go:117] "RemoveContainer" containerID="605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc" Mar 13 12:58:49.102476 master-0 kubenswrapper[19715]: E0313 12:58:49.102434 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc\": container with ID starting with 605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc not found: ID does not exist" containerID="605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc" Mar 13 12:58:49.102521 master-0 kubenswrapper[19715]: I0313 12:58:49.102476 19715 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc"} err="failed to get container status \"605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc\": rpc error: code = NotFound desc = could not find container \"605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc\": container with ID starting with 605b4b561ea4796ebc73564d4101c5b7c8d151895c711b5d2cead4ee730e06cc not found: ID does not exist" Mar 13 12:58:49.418596 master-0 kubenswrapper[19715]: E0313 12:58:49.418145 19715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c68058b150b44 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a814bd60de133d95cf99630a978c017e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:58:46.294670148 +0000 UTC m=+552.861342905,LastTimestamp:2026-03-13 12:58:46.294670148 +0000 UTC m=+552.861342905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:58:49.726147 master-0 kubenswrapper[19715]: I0313 12:58:49.725948 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077dd10388b9e3e48a07382126e86621" path="/var/lib/kubelet/pods/077dd10388b9e3e48a07382126e86621/volumes" Mar 13 12:58:51.001939 master-0 
kubenswrapper[19715]: E0313 12:58:51.001832 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:51.002810 master-0 kubenswrapper[19715]: E0313 12:58:51.002748 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:51.004599 master-0 kubenswrapper[19715]: E0313 12:58:51.004499 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:51.005918 master-0 kubenswrapper[19715]: E0313 12:58:51.005850 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:51.006835 master-0 kubenswrapper[19715]: E0313 12:58:51.006794 19715 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:51.006835 master-0 kubenswrapper[19715]: I0313 12:58:51.006832 19715 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 12:58:51.008077 master-0 kubenswrapper[19715]: E0313 12:58:51.007995 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 13 12:58:51.209882 master-0 kubenswrapper[19715]: E0313 12:58:51.209807 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 13 12:58:51.611630 master-0 kubenswrapper[19715]: E0313 12:58:51.610992 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 13 12:58:51.810127 master-0 kubenswrapper[19715]: I0313 12:58:51.808297 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:58:51.810127 master-0 kubenswrapper[19715]: I0313 12:58:51.808384 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:58:52.412273 master-0 kubenswrapper[19715]: E0313 12:58:52.412146 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 13 
12:58:53.703202 master-0 kubenswrapper[19715]: I0313 12:58:53.703003 19715 status_manager.go:851] "Failed to get status for pod" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:58:54.015321 master-0 kubenswrapper[19715]: E0313 12:58:54.014940 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 13 12:58:56.336449 master-0 kubenswrapper[19715]: I0313 12:58:56.336353 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:58:56.337323 master-0 kubenswrapper[19715]: I0313 12:58:56.336453 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:58:57.217101 master-0 kubenswrapper[19715]: E0313 12:58:57.217004 19715 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 13 12:58:59.422524 master-0 kubenswrapper[19715]: E0313 12:58:59.422167 19715 event.go:368] "Unable to write event (may retry after sleeping)" err="Post 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-master-0.189c68058b150b44 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-master-0,UID:a814bd60de133d95cf99630a978c017e,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 12:58:46.294670148 +0000 UTC m=+552.861342905,LastTimestamp:2026-03-13 12:58:46.294670148 +0000 UTC m=+552.861342905,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 12:59:00.059352 master-0 kubenswrapper[19715]: I0313 12:59:00.059260 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a38f6c36de78a5cb446093c52f21a20d/kube-controller-manager/0.log" Mar 13 12:59:00.059352 master-0 kubenswrapper[19715]: I0313 12:59:00.059349 19715 generic.go:334] "Generic (PLEG): container finished" podID="a38f6c36de78a5cb446093c52f21a20d" containerID="f357e81908ad10f5e64bb39fb55781d86cbe402be9ed1b0dcf2bf8216bedd524" exitCode=1 Mar 13 12:59:00.059911 master-0 kubenswrapper[19715]: I0313 12:59:00.059400 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a38f6c36de78a5cb446093c52f21a20d","Type":"ContainerDied","Data":"f357e81908ad10f5e64bb39fb55781d86cbe402be9ed1b0dcf2bf8216bedd524"} Mar 13 12:59:00.060105 master-0 
kubenswrapper[19715]: I0313 12:59:00.060063 19715 scope.go:117] "RemoveContainer" containerID="f357e81908ad10f5e64bb39fb55781d86cbe402be9ed1b0dcf2bf8216bedd524" Mar 13 12:59:00.062138 master-0 kubenswrapper[19715]: I0313 12:59:00.062113 19715 status_manager.go:851] "Failed to get status for pod" podUID="a38f6c36de78a5cb446093c52f21a20d" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:59:00.063497 master-0 kubenswrapper[19715]: I0313 12:59:00.063350 19715 status_manager.go:851] "Failed to get status for pod" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:59:01.071887 master-0 kubenswrapper[19715]: I0313 12:59:01.071798 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a38f6c36de78a5cb446093c52f21a20d/kube-controller-manager/0.log" Mar 13 12:59:01.071887 master-0 kubenswrapper[19715]: I0313 12:59:01.071881 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a38f6c36de78a5cb446093c52f21a20d","Type":"ContainerStarted","Data":"7ad9a1cca6a8d8e8455dfe6d82a9c3bc49497968cec1a56ae1c60cb085816adf"} Mar 13 12:59:01.074543 master-0 kubenswrapper[19715]: I0313 12:59:01.074426 19715 status_manager.go:851] "Failed to get status for pod" podUID="a38f6c36de78a5cb446093c52f21a20d" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:59:01.075476 master-0 kubenswrapper[19715]: I0313 12:59:01.075398 19715 status_manager.go:851] "Failed to get status for pod" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:59:01.696012 master-0 kubenswrapper[19715]: I0313 12:59:01.695920 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:01.697820 master-0 kubenswrapper[19715]: I0313 12:59:01.697737 19715 status_manager.go:851] "Failed to get status for pod" podUID="a38f6c36de78a5cb446093c52f21a20d" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:59:01.698862 master-0 kubenswrapper[19715]: I0313 12:59:01.698775 19715 status_manager.go:851] "Failed to get status for pod" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:59:01.718374 master-0 kubenswrapper[19715]: I0313 12:59:01.718290 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:01.718374 master-0 kubenswrapper[19715]: I0313 12:59:01.718360 19715 
mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:01.721604 master-0 kubenswrapper[19715]: E0313 12:59:01.721497 19715 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:01.722765 master-0 kubenswrapper[19715]: I0313 12:59:01.722725 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:01.763835 master-0 kubenswrapper[19715]: W0313 12:59:01.763465 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod36d4251d3504cdc0ec85144c1379056c.slice/crio-c0a861eafa7dc22256426bfbbedbf6ab4816d6264efaff36f1ccf96e1c10ec66 WatchSource:0}: Error finding container c0a861eafa7dc22256426bfbbedbf6ab4816d6264efaff36f1ccf96e1c10ec66: Status 404 returned error can't find the container with id c0a861eafa7dc22256426bfbbedbf6ab4816d6264efaff36f1ccf96e1c10ec66 Mar 13 12:59:01.775507 master-0 kubenswrapper[19715]: I0313 12:59:01.775443 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:59:01.775742 master-0 kubenswrapper[19715]: I0313 12:59:01.775513 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" 
Mar 13 12:59:01.775742 master-0 kubenswrapper[19715]: I0313 12:59:01.775630 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:59:01.776390 master-0 kubenswrapper[19715]: I0313 12:59:01.776348 19715 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="console" containerStatusID={"Type":"cri-o","ID":"3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af"} pod="openshift-console/console-7fdf5454d9-tzhsm" containerMessage="Container console failed startup probe, will be restarted" Mar 13 12:59:02.085278 master-0 kubenswrapper[19715]: I0313 12:59:02.085205 19715 generic.go:334] "Generic (PLEG): container finished" podID="36d4251d3504cdc0ec85144c1379056c" containerID="a6d426ef961e119fb267eff169a6fd5d48c7df0646c8bd860b7265a6dc48d54b" exitCode=0 Mar 13 12:59:02.085943 master-0 kubenswrapper[19715]: I0313 12:59:02.085287 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerDied","Data":"a6d426ef961e119fb267eff169a6fd5d48c7df0646c8bd860b7265a6dc48d54b"} Mar 13 12:59:02.085943 master-0 kubenswrapper[19715]: I0313 12:59:02.085346 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"c0a861eafa7dc22256426bfbbedbf6ab4816d6264efaff36f1ccf96e1c10ec66"} Mar 13 12:59:02.085943 master-0 kubenswrapper[19715]: I0313 12:59:02.085744 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:02.085943 master-0 kubenswrapper[19715]: I0313 12:59:02.085763 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 
12:59:02.087354 master-0 kubenswrapper[19715]: I0313 12:59:02.087215 19715 status_manager.go:851] "Failed to get status for pod" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" pod="openshift-kube-apiserver/installer-5-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-5-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:59:02.087448 master-0 kubenswrapper[19715]: E0313 12:59:02.087325 19715 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:02.088207 master-0 kubenswrapper[19715]: I0313 12:59:02.088150 19715 status_manager.go:851] "Failed to get status for pod" podUID="a38f6c36de78a5cb446093c52f21a20d" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 12:59:02.551501 master-0 kubenswrapper[19715]: E0313 12:59:02.551426 19715 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err="command 'sleep 25' exited with 137: " execCommand=["sleep","25"] containerName="console" pod="openshift-console/console-7fdf5454d9-tzhsm" message="" Mar 13 12:59:02.551671 master-0 kubenswrapper[19715]: E0313 12:59:02.551504 19715 kuberuntime_container.go:691] "PreStop hook failed" err="command 'sleep 25' exited with 137: " pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" containerID="cri-o://3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af" Mar 13 12:59:02.551671 master-0 kubenswrapper[19715]: I0313 12:59:02.551608 
19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" containerID="cri-o://3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af" gracePeriod=40 Mar 13 12:59:03.100431 master-0 kubenswrapper[19715]: I0313 12:59:03.100344 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"b9eabbc3f8b731e08f7a3551fda0122dae229418bfac45db3d66948c036e675c"} Mar 13 12:59:03.100431 master-0 kubenswrapper[19715]: I0313 12:59:03.100428 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"d39343fb40648519126a00dded89cf4e8439c83794c5dc25091d4c6c1dc2a12f"} Mar 13 12:59:03.104249 master-0 kubenswrapper[19715]: I0313 12:59:03.104183 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fdf5454d9-tzhsm_897ba022-a904-4e2f-9317-d675122727fd/console/0.log" Mar 13 12:59:03.104483 master-0 kubenswrapper[19715]: I0313 12:59:03.104449 19715 generic.go:334] "Generic (PLEG): container finished" podID="897ba022-a904-4e2f-9317-d675122727fd" containerID="3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af" exitCode=255 Mar 13 12:59:03.104693 master-0 kubenswrapper[19715]: I0313 12:59:03.104647 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fdf5454d9-tzhsm" event={"ID":"897ba022-a904-4e2f-9317-d675122727fd","Type":"ContainerDied","Data":"3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af"} Mar 13 12:59:03.104939 master-0 kubenswrapper[19715]: I0313 12:59:03.104915 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fdf5454d9-tzhsm" 
event={"ID":"897ba022-a904-4e2f-9317-d675122727fd","Type":"ContainerStarted","Data":"874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856"} Mar 13 12:59:03.883848 master-0 kubenswrapper[19715]: I0313 12:59:03.883773 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:59:03.883848 master-0 kubenswrapper[19715]: I0313 12:59:03.883843 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:59:03.884072 master-0 kubenswrapper[19715]: I0313 12:59:03.883939 19715 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 12:59:03.884072 master-0 kubenswrapper[19715]: I0313 12:59:03.884018 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a38f6c36de78a5cb446093c52f21a20d" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 12:59:04.122882 master-0 kubenswrapper[19715]: I0313 12:59:04.122791 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"9b716aea7d7fd28320fa1a2b91a934a8c64a098482af1f33147db968917b0cb2"} Mar 13 12:59:04.122882 master-0 kubenswrapper[19715]: I0313 12:59:04.122876 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" 
event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"ec244d68dcb2d06414d3ed0b1bb561cd31cc5d47e597091669963038e577447d"} Mar 13 12:59:04.122882 master-0 kubenswrapper[19715]: I0313 12:59:04.122892 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"36d4251d3504cdc0ec85144c1379056c","Type":"ContainerStarted","Data":"5c4d01c5165a415d194f60cc7878dcdddb06250c1c57c8787072a191adf5916a"} Mar 13 12:59:04.123798 master-0 kubenswrapper[19715]: I0313 12:59:04.123424 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:04.123798 master-0 kubenswrapper[19715]: I0313 12:59:04.123475 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:06.336274 master-0 kubenswrapper[19715]: I0313 12:59:06.336175 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:59:06.337199 master-0 kubenswrapper[19715]: I0313 12:59:06.336310 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:59:06.723154 master-0 kubenswrapper[19715]: I0313 12:59:06.723074 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:06.723154 master-0 kubenswrapper[19715]: I0313 12:59:06.723170 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:06.731283 master-0 kubenswrapper[19715]: I0313 12:59:06.731186 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:09.908481 master-0 kubenswrapper[19715]: I0313 12:59:09.906310 19715 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:10.022082 master-0 kubenswrapper[19715]: I0313 12:59:10.022006 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="eacedae0-a769-432c-a239-93401fcec4b9" Mar 13 12:59:10.301186 master-0 kubenswrapper[19715]: I0313 12:59:10.301030 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:10.301186 master-0 kubenswrapper[19715]: I0313 12:59:10.301127 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:10.301186 master-0 kubenswrapper[19715]: I0313 12:59:10.301167 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:11.309787 master-0 kubenswrapper[19715]: I0313 12:59:11.309558 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:11.309787 master-0 kubenswrapper[19715]: I0313 12:59:11.309764 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:11.324895 master-0 kubenswrapper[19715]: I0313 12:59:11.317505 19715 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:11.886341 master-0 kubenswrapper[19715]: I0313 12:59:11.881458 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:59:11.886341 master-0 kubenswrapper[19715]: I0313 12:59:11.881520 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:59:11.919724 master-0 kubenswrapper[19715]: I0313 12:59:11.911988 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:59:11.919724 master-0 kubenswrapper[19715]: I0313 12:59:11.912057 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 12:59:12.324919 master-0 kubenswrapper[19715]: I0313 12:59:12.324681 19715 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:12.326043 master-0 kubenswrapper[19715]: I0313 12:59:12.326010 19715 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="5048c963-d379-4c76-a87d-a00bff1b215d" Mar 13 12:59:13.715717 master-0 kubenswrapper[19715]: I0313 12:59:13.715652 19715 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="36d4251d3504cdc0ec85144c1379056c" podUID="eacedae0-a769-432c-a239-93401fcec4b9" Mar 13 12:59:13.771222 
master-0 kubenswrapper[19715]: I0313 12:59:13.770596 19715 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 12:59:13.771222 master-0 kubenswrapper[19715]: I0313 12:59:13.770683 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a38f6c36de78a5cb446093c52f21a20d" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 12:59:16.336785 master-0 kubenswrapper[19715]: I0313 12:59:16.336680 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:59:16.337876 master-0 kubenswrapper[19715]: I0313 12:59:16.337498 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:59:19.664497 master-0 kubenswrapper[19715]: I0313 12:59:19.664201 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 12:59:19.687642 master-0 kubenswrapper[19715]: I0313 12:59:19.687551 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator" Mar 13 12:59:20.433838 master-0 kubenswrapper[19715]: I0313 12:59:20.433751 19715 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 12:59:20.977721 master-0 kubenswrapper[19715]: I0313 12:59:20.977629 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 13 12:59:21.222665 master-0 kubenswrapper[19715]: I0313 12:59:21.222554 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Mar 13 12:59:21.230848 master-0 kubenswrapper[19715]: I0313 12:59:21.230692 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 12:59:21.319269 master-0 kubenswrapper[19715]: I0313 12:59:21.319191 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 12:59:21.411068 master-0 kubenswrapper[19715]: I0313 12:59:21.410982 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 12:59:21.457224 master-0 kubenswrapper[19715]: I0313 12:59:21.457139 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Mar 13 12:59:21.467150 master-0 kubenswrapper[19715]: I0313 12:59:21.467083 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:59:21.645655 master-0 kubenswrapper[19715]: I0313 12:59:21.645553 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 12:59:21.656787 master-0 kubenswrapper[19715]: I0313 12:59:21.656743 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert" Mar 13 12:59:21.738758 master-0 kubenswrapper[19715]: I0313 12:59:21.738379 19715 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 13 12:59:21.777042 master-0 kubenswrapper[19715]: I0313 12:59:21.776816 19715 patch_prober.go:28] interesting pod/console-7fdf5454d9-tzhsm container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 12:59:21.777042 master-0 kubenswrapper[19715]: I0313 12:59:21.776937 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 12:59:21.787677 master-0 kubenswrapper[19715]: I0313 12:59:21.787350 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 12:59:21.816509 master-0 kubenswrapper[19715]: I0313 12:59:21.816435 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 13 12:59:21.821168 master-0 kubenswrapper[19715]: I0313 12:59:21.821087 19715 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 12:59:21.837822 master-0 kubenswrapper[19715]: I0313 12:59:21.837725 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:59:21.837822 master-0 kubenswrapper[19715]: I0313 12:59:21.837835 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 12:59:21.844058 master-0 kubenswrapper[19715]: I0313 12:59:21.843974 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 12:59:21.882084 
master-0 kubenswrapper[19715]: I0313 12:59:21.881787 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=12.881732059 podStartE2EDuration="12.881732059s" podCreationTimestamp="2026-03-13 12:59:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 12:59:21.869206571 +0000 UTC m=+588.435879328" watchObservedRunningTime="2026-03-13 12:59:21.881732059 +0000 UTC m=+588.448404816" Mar 13 12:59:21.958836 master-0 kubenswrapper[19715]: I0313 12:59:21.958642 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 13 12:59:22.063645 master-0 kubenswrapper[19715]: I0313 12:59:22.062280 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 13 12:59:22.097741 master-0 kubenswrapper[19715]: I0313 12:59:22.097561 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 12:59:22.129845 master-0 kubenswrapper[19715]: I0313 12:59:22.129732 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 13 12:59:22.227484 master-0 kubenswrapper[19715]: I0313 12:59:22.227302 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 12:59:22.262687 master-0 kubenswrapper[19715]: I0313 12:59:22.262567 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 13 12:59:22.270357 master-0 kubenswrapper[19715]: I0313 12:59:22.270275 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 
12:59:22.286802 master-0 kubenswrapper[19715]: I0313 12:59:22.286709 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 12:59:22.363858 master-0 kubenswrapper[19715]: I0313 12:59:22.363747 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Mar 13 12:59:22.657395 master-0 kubenswrapper[19715]: I0313 12:59:22.657299 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 12:59:22.665589 master-0 kubenswrapper[19715]: I0313 12:59:22.665391 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-84k2dnesbumig" Mar 13 12:59:22.707276 master-0 kubenswrapper[19715]: I0313 12:59:22.707180 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Mar 13 12:59:22.785550 master-0 kubenswrapper[19715]: I0313 12:59:22.785468 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 12:59:22.788011 master-0 kubenswrapper[19715]: I0313 12:59:22.787986 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-lcdwj" Mar 13 12:59:22.911730 master-0 kubenswrapper[19715]: I0313 12:59:22.911152 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 13 12:59:22.911730 master-0 kubenswrapper[19715]: I0313 12:59:22.911444 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 13 12:59:22.912145 master-0 kubenswrapper[19715]: I0313 12:59:22.911772 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 12:59:22.929993 
master-0 kubenswrapper[19715]: I0313 12:59:22.921243 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 13 12:59:22.936768 master-0 kubenswrapper[19715]: I0313 12:59:22.933083 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 13 12:59:22.950612 master-0 kubenswrapper[19715]: I0313 12:59:22.950483 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 12:59:23.031628 master-0 kubenswrapper[19715]: I0313 12:59:23.026991 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 12:59:23.219911 master-0 kubenswrapper[19715]: I0313 12:59:23.219747 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 12:59:23.295779 master-0 kubenswrapper[19715]: I0313 12:59:23.295676 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-wsv7b" Mar 13 12:59:23.313120 master-0 kubenswrapper[19715]: I0313 12:59:23.313043 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 13 12:59:23.356988 master-0 kubenswrapper[19715]: I0313 12:59:23.356893 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 13 12:59:23.454845 master-0 kubenswrapper[19715]: I0313 12:59:23.454773 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 12:59:23.527323 master-0 kubenswrapper[19715]: I0313 12:59:23.527072 19715 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 12:59:23.548227 master-0 kubenswrapper[19715]: I0313 12:59:23.548132 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4f2vw" Mar 13 12:59:23.610887 master-0 kubenswrapper[19715]: I0313 12:59:23.610779 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 13 12:59:23.615633 master-0 kubenswrapper[19715]: I0313 12:59:23.613932 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 12:59:23.631902 master-0 kubenswrapper[19715]: I0313 12:59:23.631812 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 12:59:23.678001 master-0 kubenswrapper[19715]: I0313 12:59:23.677906 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 12:59:23.770142 master-0 kubenswrapper[19715]: I0313 12:59:23.770053 19715 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 12:59:23.770685 master-0 kubenswrapper[19715]: I0313 12:59:23.770626 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a38f6c36de78a5cb446093c52f21a20d" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 12:59:23.770877 master-0 kubenswrapper[19715]: I0313 12:59:23.770855 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 12:59:23.772297 master-0 kubenswrapper[19715]: I0313 12:59:23.772261 19715 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"7ad9a1cca6a8d8e8455dfe6d82a9c3bc49497968cec1a56ae1c60cb085816adf"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 12:59:23.772610 master-0 kubenswrapper[19715]: I0313 12:59:23.772563 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="a38f6c36de78a5cb446093c52f21a20d" containerName="kube-controller-manager" containerID="cri-o://7ad9a1cca6a8d8e8455dfe6d82a9c3bc49497968cec1a56ae1c60cb085816adf" gracePeriod=30 Mar 13 12:59:23.789895 master-0 kubenswrapper[19715]: I0313 12:59:23.789734 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 13 12:59:23.809036 master-0 kubenswrapper[19715]: I0313 12:59:23.808959 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-zzprz" Mar 13 12:59:23.819504 master-0 kubenswrapper[19715]: I0313 12:59:23.819446 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 12:59:23.841018 master-0 kubenswrapper[19715]: I0313 12:59:23.840928 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-dh2b7" Mar 13 12:59:23.878413 master-0 kubenswrapper[19715]: I0313 12:59:23.878337 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 
12:59:23.956420 master-0 kubenswrapper[19715]: I0313 12:59:23.956321 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 12:59:23.991069 master-0 kubenswrapper[19715]: I0313 12:59:23.990987 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 12:59:24.072824 master-0 kubenswrapper[19715]: I0313 12:59:24.072623 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Mar 13 12:59:24.148737 master-0 kubenswrapper[19715]: I0313 12:59:24.148668 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Mar 13 12:59:24.149215 master-0 kubenswrapper[19715]: I0313 12:59:24.148827 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 12:59:24.150943 master-0 kubenswrapper[19715]: I0313 12:59:24.150919 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 13 12:59:24.218187 master-0 kubenswrapper[19715]: I0313 12:59:24.218093 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 13 12:59:24.275613 master-0 kubenswrapper[19715]: I0313 12:59:24.275513 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-zllxz" Mar 13 12:59:24.298025 master-0 kubenswrapper[19715]: I0313 12:59:24.297939 19715 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 12:59:24.304317 master-0 kubenswrapper[19715]: I0313 12:59:24.304256 19715 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-monitoring"/"prometheus-operator-admission-webhook-dockercfg-g589p" Mar 13 12:59:24.329906 master-0 kubenswrapper[19715]: I0313 12:59:24.329678 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Mar 13 12:59:24.352133 master-0 kubenswrapper[19715]: I0313 12:59:24.352021 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 12:59:24.465701 master-0 kubenswrapper[19715]: I0313 12:59:24.465631 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 13 12:59:24.478994 master-0 kubenswrapper[19715]: I0313 12:59:24.478952 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 12:59:24.555371 master-0 kubenswrapper[19715]: I0313 12:59:24.555307 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 13 12:59:24.649355 master-0 kubenswrapper[19715]: I0313 12:59:24.649252 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 12:59:24.653031 master-0 kubenswrapper[19715]: I0313 12:59:24.653010 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 13 12:59:24.754476 master-0 kubenswrapper[19715]: I0313 12:59:24.754418 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-dockercfg-p8xg8" Mar 13 12:59:24.757945 master-0 kubenswrapper[19715]: I0313 12:59:24.757909 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 13 12:59:24.774294 master-0 kubenswrapper[19715]: I0313 12:59:24.774244 19715 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 13 12:59:24.787624 master-0 kubenswrapper[19715]: I0313 12:59:24.787516 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Mar 13 12:59:25.368704 master-0 kubenswrapper[19715]: I0313 12:59:25.368624 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 13 12:59:25.380338 master-0 kubenswrapper[19715]: I0313 12:59:25.380268 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 13 12:59:25.380665 master-0 kubenswrapper[19715]: I0313 12:59:25.380371 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 12:59:25.380738 master-0 kubenswrapper[19715]: I0313 12:59:25.380715 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 13 12:59:25.385668 master-0 kubenswrapper[19715]: I0313 12:59:25.384815 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 13 12:59:25.388392 master-0 kubenswrapper[19715]: I0313 12:59:25.387312 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 12:59:25.390158 master-0 kubenswrapper[19715]: I0313 12:59:25.390088 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 12:59:25.403256 master-0 kubenswrapper[19715]: I0313 12:59:25.402322 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 12:59:25.567489 master-0 kubenswrapper[19715]: I0313 12:59:25.567344 19715 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 13 12:59:25.610616 master-0 kubenswrapper[19715]: I0313 12:59:25.609727 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 12:59:25.627812 master-0 kubenswrapper[19715]: I0313 12:59:25.626968 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 13 12:59:25.671456 master-0 kubenswrapper[19715]: I0313 12:59:25.671369 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 12:59:25.740671 master-0 kubenswrapper[19715]: I0313 12:59:25.740569 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 12:59:25.748668 master-0 kubenswrapper[19715]: I0313 12:59:25.748547 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 13 12:59:25.803704 master-0 kubenswrapper[19715]: I0313 12:59:25.803620 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 12:59:25.859374 master-0 kubenswrapper[19715]: I0313 12:59:25.859287 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 13 12:59:25.919082 master-0 kubenswrapper[19715]: I0313 12:59:25.918945 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 13 12:59:25.946020 master-0 kubenswrapper[19715]: I0313 12:59:25.945870 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 12:59:25.951709 master-0 
kubenswrapper[19715]: I0313 12:59:25.951658 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 13 12:59:25.957531 master-0 kubenswrapper[19715]: I0313 12:59:25.956140 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 12:59:25.994091 master-0 kubenswrapper[19715]: I0313 12:59:25.990329 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 12:59:25.994091 master-0 kubenswrapper[19715]: I0313 12:59:25.990720 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 12:59:26.010228 master-0 kubenswrapper[19715]: I0313 12:59:26.010175 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 12:59:26.058172 master-0 kubenswrapper[19715]: I0313 12:59:26.058057 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 12:59:26.089729 master-0 kubenswrapper[19715]: I0313 12:59:26.089642 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 12:59:26.175564 master-0 kubenswrapper[19715]: I0313 12:59:26.174757 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-mr4r4" Mar 13 12:59:26.188102 master-0 kubenswrapper[19715]: I0313 12:59:26.188020 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 12:59:26.198571 master-0 kubenswrapper[19715]: I0313 12:59:26.198488 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle" Mar 13 12:59:26.226325 master-0 kubenswrapper[19715]: I0313 
12:59:26.226259 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 12:59:26.274496 master-0 kubenswrapper[19715]: I0313 12:59:26.274431 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 12:59:26.284628 master-0 kubenswrapper[19715]: I0313 12:59:26.284408 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 12:59:26.320437 master-0 kubenswrapper[19715]: I0313 12:59:26.320340 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 13 12:59:26.336733 master-0 kubenswrapper[19715]: I0313 12:59:26.336652 19715 patch_prober.go:28] interesting pod/console-79d876f4d6-kqmws container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" start-of-body= Mar 13 12:59:26.337312 master-0 kubenswrapper[19715]: I0313 12:59:26.337263 19715 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" probeResult="failure" output="Get \"https://10.128.0.99:8443/health\": dial tcp 10.128.0.99:8443: connect: connection refused" Mar 13 12:59:26.366672 master-0 kubenswrapper[19715]: I0313 12:59:26.366570 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 12:59:26.382435 master-0 kubenswrapper[19715]: I0313 12:59:26.382345 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 12:59:26.443536 master-0 kubenswrapper[19715]: I0313 12:59:26.443360 19715 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 13 12:59:26.471383 master-0 kubenswrapper[19715]: I0313 12:59:26.471271 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Mar 13 12:59:26.501795 master-0 kubenswrapper[19715]: I0313 12:59:26.501724 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 12:59:26.507437 master-0 kubenswrapper[19715]: I0313 12:59:26.507383 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 12:59:26.522453 master-0 kubenswrapper[19715]: I0313 12:59:26.522392 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-2tphk" Mar 13 12:59:26.532822 master-0 kubenswrapper[19715]: I0313 12:59:26.532771 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 12:59:26.608472 master-0 kubenswrapper[19715]: I0313 12:59:26.608358 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 12:59:26.631294 master-0 kubenswrapper[19715]: I0313 12:59:26.631208 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 12:59:26.680390 master-0 kubenswrapper[19715]: I0313 12:59:26.680292 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 13 12:59:26.737667 master-0 kubenswrapper[19715]: I0313 12:59:26.737473 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-fbzjs" Mar 13 12:59:26.786745 master-0 kubenswrapper[19715]: I0313 12:59:26.786629 19715 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 12:59:26.856428 master-0 kubenswrapper[19715]: I0313 12:59:26.856323 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 13 12:59:26.997040 master-0 kubenswrapper[19715]: I0313 12:59:26.996800 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 12:59:27.016205 master-0 kubenswrapper[19715]: I0313 12:59:27.016136 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 12:59:27.040471 master-0 kubenswrapper[19715]: I0313 12:59:27.040371 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 13 12:59:27.052249 master-0 kubenswrapper[19715]: I0313 12:59:27.052202 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 12:59:27.090763 master-0 kubenswrapper[19715]: I0313 12:59:27.090696 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Mar 13 12:59:27.101656 master-0 kubenswrapper[19715]: I0313 12:59:27.101608 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 12:59:27.197869 master-0 kubenswrapper[19715]: I0313 12:59:27.197770 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-dockercfg-5lcmq" Mar 13 12:59:27.226423 master-0 kubenswrapper[19715]: I0313 12:59:27.226140 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 12:59:27.254613 
master-0 kubenswrapper[19715]: I0313 12:59:27.254377 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 12:59:27.275292 master-0 kubenswrapper[19715]: I0313 12:59:27.275230 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Mar 13 12:59:27.692923 master-0 kubenswrapper[19715]: I0313 12:59:27.692622 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 12:59:27.693821 master-0 kubenswrapper[19715]: I0313 12:59:27.692987 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"operator-dockercfg-kcbnp" Mar 13 12:59:27.693821 master-0 kubenswrapper[19715]: I0313 12:59:27.693118 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 12:59:27.693821 master-0 kubenswrapper[19715]: I0313 12:59:27.693178 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 12:59:27.693821 master-0 kubenswrapper[19715]: I0313 12:59:27.693475 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 13 12:59:27.693821 master-0 kubenswrapper[19715]: I0313 12:59:27.693484 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Mar 13 12:59:27.693821 master-0 kubenswrapper[19715]: I0313 12:59:27.693502 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-7pbjup2gcsfqa" Mar 13 12:59:27.716217 master-0 kubenswrapper[19715]: I0313 12:59:27.716133 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 12:59:27.752102 master-0 kubenswrapper[19715]: I0313 
12:59:27.746125 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Mar 13 12:59:27.752102 master-0 kubenswrapper[19715]: I0313 12:59:27.750995 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 12:59:27.778621 master-0 kubenswrapper[19715]: I0313 12:59:27.777618 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt" Mar 13 12:59:27.794629 master-0 kubenswrapper[19715]: I0313 12:59:27.788636 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert" Mar 13 12:59:27.856835 master-0 kubenswrapper[19715]: I0313 12:59:27.856768 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 13 12:59:27.926704 master-0 kubenswrapper[19715]: I0313 12:59:27.926623 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 12:59:27.963918 master-0 kubenswrapper[19715]: I0313 12:59:27.963750 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 12:59:27.994402 master-0 kubenswrapper[19715]: I0313 12:59:27.994326 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 12:59:27.998086 master-0 kubenswrapper[19715]: I0313 12:59:27.998050 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 13 12:59:28.027810 master-0 kubenswrapper[19715]: I0313 12:59:28.027671 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 13 12:59:28.075162 master-0 kubenswrapper[19715]: I0313 12:59:28.075086 
19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 12:59:28.081659 master-0 kubenswrapper[19715]: I0313 12:59:28.081567 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-qmg42" Mar 13 12:59:28.226274 master-0 kubenswrapper[19715]: I0313 12:59:28.226028 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Mar 13 12:59:28.292735 master-0 kubenswrapper[19715]: I0313 12:59:28.292651 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 13 12:59:28.294241 master-0 kubenswrapper[19715]: I0313 12:59:28.294164 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Mar 13 12:59:28.295603 master-0 kubenswrapper[19715]: I0313 12:59:28.295529 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-mjc6s" Mar 13 12:59:28.338646 master-0 kubenswrapper[19715]: I0313 12:59:28.337376 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-gbnht" Mar 13 12:59:28.358624 master-0 kubenswrapper[19715]: I0313 12:59:28.355949 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 12:59:28.491526 master-0 kubenswrapper[19715]: I0313 12:59:28.491341 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 13 12:59:28.522513 master-0 kubenswrapper[19715]: I0313 12:59:28.522408 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 12:59:28.575464 master-0 kubenswrapper[19715]: I0313 12:59:28.575389 19715 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 12:59:28.603873 master-0 kubenswrapper[19715]: I0313 12:59:28.603789 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 13 12:59:28.638167 master-0 kubenswrapper[19715]: I0313 12:59:28.638081 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 12:59:28.689061 master-0 kubenswrapper[19715]: I0313 12:59:28.688979 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 12:59:28.801077 master-0 kubenswrapper[19715]: I0313 12:59:28.800210 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 12:59:28.829746 master-0 kubenswrapper[19715]: I0313 12:59:28.829646 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Mar 13 12:59:28.839302 master-0 kubenswrapper[19715]: I0313 12:59:28.839023 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 12:59:28.855825 master-0 kubenswrapper[19715]: I0313 12:59:28.855754 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 13 12:59:28.948070 master-0 kubenswrapper[19715]: I0313 12:59:28.947982 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-8qwx8" Mar 13 12:59:28.949164 master-0 kubenswrapper[19715]: I0313 12:59:28.949118 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-cjs56" Mar 13 12:59:28.967965 master-0 kubenswrapper[19715]: I0313 
12:59:28.967866 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 13 12:59:28.981039 master-0 kubenswrapper[19715]: I0313 12:59:28.980681 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 13 12:59:28.991151 master-0 kubenswrapper[19715]: I0313 12:59:28.991079 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-7gls2" Mar 13 12:59:29.039922 master-0 kubenswrapper[19715]: I0313 12:59:29.039826 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 13 12:59:29.049587 master-0 kubenswrapper[19715]: I0313 12:59:29.049512 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 12:59:29.051779 master-0 kubenswrapper[19715]: I0313 12:59:29.051651 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert" Mar 13 12:59:29.059655 master-0 kubenswrapper[19715]: I0313 12:59:29.059592 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 13 12:59:29.088376 master-0 kubenswrapper[19715]: I0313 12:59:29.088286 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-xv4qd" Mar 13 12:59:29.119167 master-0 kubenswrapper[19715]: I0313 12:59:29.118897 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-7t467" Mar 13 12:59:29.133789 master-0 kubenswrapper[19715]: I0313 12:59:29.133469 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 12:59:29.152400 master-0 kubenswrapper[19715]: I0313 12:59:29.152147 
19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 12:59:29.158244 master-0 kubenswrapper[19715]: I0313 12:59:29.158205 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 12:59:29.263324 master-0 kubenswrapper[19715]: I0313 12:59:29.263235 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 13 12:59:29.325551 master-0 kubenswrapper[19715]: I0313 12:59:29.324194 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-dockercfg-9n5pq" Mar 13 12:59:29.364163 master-0 kubenswrapper[19715]: I0313 12:59:29.364091 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 12:59:29.364163 master-0 kubenswrapper[19715]: I0313 12:59:29.364091 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 12:59:29.365712 master-0 kubenswrapper[19715]: I0313 12:59:29.365679 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 12:59:29.458499 master-0 kubenswrapper[19715]: I0313 12:59:29.458433 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-6slw7" Mar 13 12:59:29.464366 master-0 kubenswrapper[19715]: I0313 12:59:29.464296 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 13 12:59:29.485007 master-0 kubenswrapper[19715]: I0313 12:59:29.484951 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-8qlr6" Mar 13 12:59:29.550569 master-0 
kubenswrapper[19715]: I0313 12:59:29.550491 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 13 12:59:29.556917 master-0 kubenswrapper[19715]: I0313 12:59:29.556882 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 12:59:29.728472 master-0 kubenswrapper[19715]: I0313 12:59:29.728275 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 13 12:59:29.744876 master-0 kubenswrapper[19715]: I0313 12:59:29.744798 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 12:59:29.810934 master-0 kubenswrapper[19715]: I0313 12:59:29.810821 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-jwq7f" Mar 13 12:59:29.816545 master-0 kubenswrapper[19715]: I0313 12:59:29.816455 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 12:59:29.826012 master-0 kubenswrapper[19715]: I0313 12:59:29.825944 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 12:59:29.836134 master-0 kubenswrapper[19715]: I0313 12:59:29.836083 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 12:59:29.855439 master-0 kubenswrapper[19715]: I0313 12:59:29.855361 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 13 12:59:29.886621 master-0 kubenswrapper[19715]: I0313 12:59:29.886524 19715 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 12:59:29.936625 master-0 kubenswrapper[19715]: I0313 12:59:29.936544 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 13 12:59:29.986630 master-0 kubenswrapper[19715]: I0313 12:59:29.986429 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-78fwj"
Mar 13 12:59:30.049858 master-0 kubenswrapper[19715]: I0313 12:59:30.049777 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 13 12:59:30.053055 master-0 kubenswrapper[19715]: I0313 12:59:30.052993 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 13 12:59:30.150594 master-0 kubenswrapper[19715]: I0313 12:59:30.150497 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 13 12:59:30.273725 master-0 kubenswrapper[19715]: I0313 12:59:30.271445 19715 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 13 12:59:30.281874 master-0 kubenswrapper[19715]: I0313 12:59:30.281761 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 13 12:59:30.297638 master-0 kubenswrapper[19715]: I0313 12:59:30.296102 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 12:59:30.444283 master-0 kubenswrapper[19715]: I0313 12:59:30.444183 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client"
Mar 13 12:59:30.606915 master-0 kubenswrapper[19715]: I0313 12:59:30.606703 19715 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 13 12:59:30.667013 master-0 kubenswrapper[19715]: I0313 12:59:30.666924 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 13 12:59:30.688156 master-0 kubenswrapper[19715]: I0313 12:59:30.688050 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-qggps"
Mar 13 12:59:30.697370 master-0 kubenswrapper[19715]: I0313 12:59:30.697296 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-comnvpv6eh6ml"
Mar 13 12:59:30.747179 master-0 kubenswrapper[19715]: I0313 12:59:30.747062 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 13 12:59:30.790708 master-0 kubenswrapper[19715]: I0313 12:59:30.790617 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 13 12:59:30.791157 master-0 kubenswrapper[19715]: I0313 12:59:30.790890 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-5s2w7"
Mar 13 12:59:30.820825 master-0 kubenswrapper[19715]: I0313 12:59:30.820733 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt"
Mar 13 12:59:30.838356 master-0 kubenswrapper[19715]: I0313 12:59:30.838280 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Mar 13 12:59:30.867650 master-0 kubenswrapper[19715]: I0313 12:59:30.867444 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38"
Mar 13 12:59:30.952046 master-0 kubenswrapper[19715]: I0313 12:59:30.951937 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 13 12:59:30.952046 master-0 kubenswrapper[19715]: I0313 12:59:30.951936 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 13 12:59:30.960510 master-0 kubenswrapper[19715]: I0313 12:59:30.960442 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-ct6jh"
Mar 13 12:59:30.964057 master-0 kubenswrapper[19715]: I0313 12:59:30.964021 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web"
Mar 13 12:59:30.967627 master-0 kubenswrapper[19715]: I0313 12:59:30.967508 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle"
Mar 13 12:59:30.996020 master-0 kubenswrapper[19715]: I0313 12:59:30.995938 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 12:59:31.047715 master-0 kubenswrapper[19715]: I0313 12:59:31.047608 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 12:59:31.064380 master-0 kubenswrapper[19715]: I0313 12:59:31.064288 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-h5lt2"
Mar 13 12:59:31.120761 master-0 kubenswrapper[19715]: I0313 12:59:31.110039 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 12:59:31.120761 master-0 kubenswrapper[19715]: I0313 12:59:31.112831 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 12:59:31.133153 master-0 kubenswrapper[19715]: I0313 12:59:31.132071 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-mwrx7"
Mar 13 12:59:31.186877 master-0 kubenswrapper[19715]: I0313 12:59:31.186798 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 12:59:31.203730 master-0 kubenswrapper[19715]: I0313 12:59:31.203612 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls"
Mar 13 12:59:31.231784 master-0 kubenswrapper[19715]: I0313 12:59:31.231489 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 13 12:59:31.259602 master-0 kubenswrapper[19715]: I0313 12:59:31.257177 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 13 12:59:31.332148 master-0 kubenswrapper[19715]: I0313 12:59:31.331235 19715 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 12:59:31.332148 master-0 kubenswrapper[19715]: I0313 12:59:31.331793 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" containerID="cri-o://80ebbe3e603359bd048440fc40f845e6a835449f423798047a8be9d5a8e84ded" gracePeriod=5
Mar 13 12:59:31.340499 master-0 kubenswrapper[19715]: I0313 12:59:31.340429 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 13 12:59:31.410286 master-0 kubenswrapper[19715]: I0313 12:59:31.410184 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Mar 13 12:59:31.510484 master-0 kubenswrapper[19715]: I0313 12:59:31.510387 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 13 12:59:31.579235 master-0 kubenswrapper[19715]: I0313 12:59:31.579139 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 12:59:31.635898 master-0 kubenswrapper[19715]: I0313 12:59:31.635825 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 13 12:59:31.636283 master-0 kubenswrapper[19715]: I0313 12:59:31.635931 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 13 12:59:31.681861 master-0 kubenswrapper[19715]: I0313 12:59:31.680127 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 13 12:59:31.695145 master-0 kubenswrapper[19715]: I0313 12:59:31.695083 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 13 12:59:31.702792 master-0 kubenswrapper[19715]: I0313 12:59:31.702714 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 13 12:59:31.729624 master-0 kubenswrapper[19715]: I0313 12:59:31.726217 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 13 12:59:31.781516 master-0 kubenswrapper[19715]: I0313 12:59:31.781346 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7fdf5454d9-tzhsm"
Mar 13 12:59:31.795673 master-0 kubenswrapper[19715]: I0313 12:59:31.792831 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7fdf5454d9-tzhsm"
Mar 13 12:59:31.798846 master-0 kubenswrapper[19715]: I0313 12:59:31.798230 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 13 12:59:31.827763 master-0 kubenswrapper[19715]: I0313 12:59:31.827656 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-l78bb"
Mar 13 12:59:31.857326 master-0 kubenswrapper[19715]: I0313 12:59:31.857098 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 13 12:59:31.960974 master-0 kubenswrapper[19715]: I0313 12:59:31.960791 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 12:59:32.030317 master-0 kubenswrapper[19715]: I0313 12:59:32.030232 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 13 12:59:32.150343 master-0 kubenswrapper[19715]: I0313 12:59:32.149989 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 13 12:59:32.170169 master-0 kubenswrapper[19715]: I0313 12:59:32.169984 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 12:59:32.259412 master-0 kubenswrapper[19715]: I0313 12:59:32.259237 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 13 12:59:32.299180 master-0 kubenswrapper[19715]: I0313 12:59:32.299090 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 13 12:59:32.310544 master-0 kubenswrapper[19715]: I0313 12:59:32.310457 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 12:59:32.310544 master-0 kubenswrapper[19715]: I0313 12:59:32.310544 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 13 12:59:32.326885 master-0 kubenswrapper[19715]: I0313 12:59:32.326776 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 13 12:59:32.338230 master-0 kubenswrapper[19715]: I0313 12:59:32.338161 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 12:59:32.393994 master-0 kubenswrapper[19715]: I0313 12:59:32.393907 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls"
Mar 13 12:59:32.448648 master-0 kubenswrapper[19715]: I0313 12:59:32.448531 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 12:59:32.498867 master-0 kubenswrapper[19715]: I0313 12:59:32.498781 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 13 12:59:32.530204 master-0 kubenswrapper[19715]: I0313 12:59:32.529968 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Mar 13 12:59:32.634250 master-0 kubenswrapper[19715]: I0313 12:59:32.634168 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 12:59:32.711459 master-0 kubenswrapper[19715]: I0313 12:59:32.711393 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 13 12:59:32.771855 master-0 kubenswrapper[19715]: I0313 12:59:32.770459 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-7fzhf"
Mar 13 12:59:32.772286 master-0 kubenswrapper[19715]: I0313 12:59:32.771944 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Mar 13 12:59:32.890092 master-0 kubenswrapper[19715]: I0313 12:59:32.890024 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-89sxl"
Mar 13 12:59:32.908602 master-0 kubenswrapper[19715]: I0313 12:59:32.908474 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 13 12:59:32.954631 master-0 kubenswrapper[19715]: I0313 12:59:32.954528 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xg9t5"
Mar 13 12:59:32.979979 master-0 kubenswrapper[19715]: I0313 12:59:32.978868 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 12:59:33.068307 master-0 kubenswrapper[19715]: I0313 12:59:33.068207 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 13 12:59:33.072883 master-0 kubenswrapper[19715]: I0313 12:59:33.072809 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 12:59:33.074437 master-0 kubenswrapper[19715]: I0313 12:59:33.074382 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-dockercfg-gft2f"
Mar 13 12:59:33.115214 master-0 kubenswrapper[19715]: I0313 12:59:33.115108 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 12:59:33.146707 master-0 kubenswrapper[19715]: I0313 12:59:33.146464 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 12:59:33.198207 master-0 kubenswrapper[19715]: I0313 12:59:33.198101 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 13 12:59:33.211705 master-0 kubenswrapper[19715]: I0313 12:59:33.211594 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 13 12:59:33.225906 master-0 kubenswrapper[19715]: I0313 12:59:33.225810 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Mar 13 12:59:33.287871 master-0 kubenswrapper[19715]: I0313 12:59:33.287784 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 13 12:59:33.521946 master-0 kubenswrapper[19715]: I0313 12:59:33.521654 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt"
Mar 13 12:59:33.525768 master-0 kubenswrapper[19715]: I0313 12:59:33.525704 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Mar 13 12:59:33.583432 master-0 kubenswrapper[19715]: I0313 12:59:33.583331 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 13 12:59:33.611328 master-0 kubenswrapper[19715]: I0313 12:59:33.611222 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 13 12:59:33.653144 master-0 kubenswrapper[19715]: I0313 12:59:33.653041 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 13 12:59:33.716271 master-0 kubenswrapper[19715]: I0313 12:59:33.716188 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 13 12:59:33.758104 master-0 kubenswrapper[19715]: I0313 12:59:33.757987 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 12:59:33.830856 master-0 kubenswrapper[19715]: I0313 12:59:33.830565 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Mar 13 12:59:33.872863 master-0 kubenswrapper[19715]: I0313 12:59:33.872680 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 13 12:59:33.878611 master-0 kubenswrapper[19715]: I0313 12:59:33.878549 19715 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 13 12:59:33.977518 master-0 kubenswrapper[19715]: I0313 12:59:33.977418 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 12:59:34.177374 master-0 kubenswrapper[19715]: I0313 12:59:34.177244 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 13 12:59:34.258330 master-0 kubenswrapper[19715]: I0313 12:59:34.258216 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 13 12:59:34.260687 master-0 kubenswrapper[19715]: I0313 12:59:34.260621 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 12:59:34.278048 master-0 kubenswrapper[19715]: I0313 12:59:34.277750 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-dockercfg-46jst"
Mar 13 12:59:34.291929 master-0 kubenswrapper[19715]: I0313 12:59:34.291837 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 13 12:59:34.316467 master-0 kubenswrapper[19715]: I0313 12:59:34.316345 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s"
Mar 13 12:59:34.481830 master-0 kubenswrapper[19715]: I0313 12:59:34.481617 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 12:59:34.485507 master-0 kubenswrapper[19715]: I0313 12:59:34.485456 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 13 12:59:34.719174 master-0 kubenswrapper[19715]: I0313 12:59:34.719075 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 12:59:34.743085 master-0 kubenswrapper[19715]: I0313 12:59:34.742909 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 13 12:59:34.797982 master-0 kubenswrapper[19715]: I0313 12:59:34.797867 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 13 12:59:34.946565 master-0 kubenswrapper[19715]: I0313 12:59:34.946379 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 13 12:59:35.024719 master-0 kubenswrapper[19715]: I0313 12:59:35.024484 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 12:59:35.041237 master-0 kubenswrapper[19715]: I0313 12:59:35.041151 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 13 12:59:35.158627 master-0 kubenswrapper[19715]: I0313 12:59:35.158535 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 13 12:59:35.331791 master-0 kubenswrapper[19715]: I0313 12:59:35.331637 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 13 12:59:35.338312 master-0 kubenswrapper[19715]: I0313 12:59:35.338237 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 12:59:35.464095 master-0 kubenswrapper[19715]: I0313 12:59:35.463979 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 13 12:59:36.344715 master-0 kubenswrapper[19715]: I0313 12:59:36.344639 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-79d876f4d6-kqmws"
Mar 13 12:59:36.349894 master-0 kubenswrapper[19715]: I0313 12:59:36.349844 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-79d876f4d6-kqmws"
Mar 13 12:59:36.536360 master-0 kubenswrapper[19715]: I0313 12:59:36.536265 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 13 12:59:36.704136 master-0 kubenswrapper[19715]: I0313 12:59:36.704018 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 13 12:59:36.840001 master-0 kubenswrapper[19715]: I0313 12:59:36.839914 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log"
Mar 13 12:59:36.840001 master-0 kubenswrapper[19715]: I0313 12:59:36.840001 19715 generic.go:334] "Generic (PLEG): container finished" podID="a814bd60de133d95cf99630a978c017e" containerID="80ebbe3e603359bd048440fc40f845e6a835449f423798047a8be9d5a8e84ded" exitCode=137
Mar 13 12:59:36.922237 master-0 kubenswrapper[19715]: I0313 12:59:36.922149 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log"
Mar 13 12:59:36.922567 master-0 kubenswrapper[19715]: I0313 12:59:36.922352 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:59:37.049735 master-0 kubenswrapper[19715]: I0313 12:59:37.049498 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 13 12:59:37.049735 master-0 kubenswrapper[19715]: I0313 12:59:37.049673 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 13 12:59:37.049735 master-0 kubenswrapper[19715]: I0313 12:59:37.049672 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests" (OuterVolumeSpecName: "manifests") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:59:37.050184 master-0 kubenswrapper[19715]: I0313 12:59:37.049824 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 13 12:59:37.050184 master-0 kubenswrapper[19715]: I0313 12:59:37.049918 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 13 12:59:37.050184 master-0 kubenswrapper[19715]: I0313 12:59:37.049906 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log" (OuterVolumeSpecName: "var-log") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:59:37.050184 master-0 kubenswrapper[19715]: I0313 12:59:37.050021 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") pod \"a814bd60de133d95cf99630a978c017e\" (UID: \"a814bd60de133d95cf99630a978c017e\") "
Mar 13 12:59:37.050184 master-0 kubenswrapper[19715]: I0313 12:59:37.050068 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:59:37.050448 master-0 kubenswrapper[19715]: I0313 12:59:37.050346 19715 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:37.050448 master-0 kubenswrapper[19715]: I0313 12:59:37.050365 19715 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-manifests\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:37.050683 master-0 kubenswrapper[19715]: I0313 12:59:37.049953 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock" (OuterVolumeSpecName: "var-lock") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:59:37.061112 master-0 kubenswrapper[19715]: I0313 12:59:37.060955 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "a814bd60de133d95cf99630a978c017e" (UID: "a814bd60de133d95cf99630a978c017e"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 12:59:37.153095 master-0 kubenswrapper[19715]: I0313 12:59:37.152984 19715 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-log\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:37.153095 master-0 kubenswrapper[19715]: I0313 12:59:37.153065 19715 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:37.153095 master-0 kubenswrapper[19715]: I0313 12:59:37.153082 19715 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/a814bd60de133d95cf99630a978c017e-pod-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 12:59:37.705842 master-0 kubenswrapper[19715]: I0313 12:59:37.705732 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a814bd60de133d95cf99630a978c017e" path="/var/lib/kubelet/pods/a814bd60de133d95cf99630a978c017e/volumes"
Mar 13 12:59:37.851965 master-0 kubenswrapper[19715]: I0313 12:59:37.851823 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_a814bd60de133d95cf99630a978c017e/startup-monitor/0.log"
Mar 13 12:59:37.852859 master-0 kubenswrapper[19715]: I0313 12:59:37.852045 19715 scope.go:117] "RemoveContainer" containerID="80ebbe3e603359bd048440fc40f845e6a835449f423798047a8be9d5a8e84ded"
Mar 13 12:59:37.852859 master-0 kubenswrapper[19715]: I0313 12:59:37.852160 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 12:59:45.286441 master-0 kubenswrapper[19715]: I0313 12:59:45.286353 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 13 12:59:47.276452 master-0 kubenswrapper[19715]: I0313 12:59:47.276367 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics"
Mar 13 12:59:53.065108 master-0 kubenswrapper[19715]: I0313 12:59:53.065047 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 13 12:59:54.017187 master-0 kubenswrapper[19715]: I0313 12:59:54.017113 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a38f6c36de78a5cb446093c52f21a20d/kube-controller-manager/1.log"
Mar 13 12:59:54.019278 master-0 kubenswrapper[19715]: I0313 12:59:54.019114 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a38f6c36de78a5cb446093c52f21a20d/kube-controller-manager/0.log"
Mar 13 12:59:54.019278 master-0 kubenswrapper[19715]: I0313 12:59:54.019188 19715 generic.go:334] "Generic (PLEG): container finished" podID="a38f6c36de78a5cb446093c52f21a20d" containerID="7ad9a1cca6a8d8e8455dfe6d82a9c3bc49497968cec1a56ae1c60cb085816adf" exitCode=137
Mar 13 12:59:54.019278 master-0 kubenswrapper[19715]: I0313 12:59:54.019241 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a38f6c36de78a5cb446093c52f21a20d","Type":"ContainerDied","Data":"7ad9a1cca6a8d8e8455dfe6d82a9c3bc49497968cec1a56ae1c60cb085816adf"}
Mar 13 12:59:54.019478 master-0 kubenswrapper[19715]: I0313 12:59:54.019319 19715 scope.go:117] "RemoveContainer" containerID="f357e81908ad10f5e64bb39fb55781d86cbe402be9ed1b0dcf2bf8216bedd524"
Mar 13 12:59:55.031439 master-0 kubenswrapper[19715]: I0313 12:59:55.031377 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a38f6c36de78a5cb446093c52f21a20d/kube-controller-manager/1.log"
Mar 13 12:59:55.032862 master-0 kubenswrapper[19715]: I0313 12:59:55.032826 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"a38f6c36de78a5cb446093c52f21a20d","Type":"ContainerStarted","Data":"81c7d89ea63481d87a6ebed44c9b12ef584032a99f703467e8aea1ae97270f69"}
Mar 13 12:59:55.504359 master-0 kubenswrapper[19715]: I0313 12:59:55.504294 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 13 12:59:55.605969 master-0 kubenswrapper[19715]: I0313 12:59:55.605908 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 13 12:59:55.625426 master-0 kubenswrapper[19715]: I0313 12:59:55.625381 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 13 12:59:55.719793 master-0 kubenswrapper[19715]: I0313 12:59:55.719698 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt"
Mar 13 12:59:57.359525 master-0 kubenswrapper[19715]: I0313 12:59:57.359444 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 12:59:58.852118 master-0 kubenswrapper[19715]: I0313 12:59:58.852017 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 12:59:59.133529 master-0 kubenswrapper[19715]: I0313 12:59:59.133472 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 12:59:59.271778 master-0 kubenswrapper[19715]: I0313 12:59:59.271468 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls"
Mar 13 13:00:03.769901 master-0 kubenswrapper[19715]: I0313 13:00:03.769773 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:03.769901 master-0 kubenswrapper[19715]: I0313 13:00:03.769911 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:03.778788 master-0 kubenswrapper[19715]: I0313 13:00:03.778686 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:04.117430 master-0 kubenswrapper[19715]: I0313 13:00:04.116964 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 13:00:04.198059 master-0 kubenswrapper[19715]: I0313 13:00:04.198002 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 13 13:00:07.840977 master-0 kubenswrapper[19715]: I0313 13:00:07.840739 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 13 13:00:08.558082 master-0 kubenswrapper[19715]: I0313 13:00:08.558012 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 13:00:10.533979 master-0 kubenswrapper[19715]: I0313 13:00:10.533833 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 13 13:00:14.238040 master-0 kubenswrapper[19715]: I0313 13:00:14.237955 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 13 13:00:15.034773 master-0 kubenswrapper[19715]: I0313 13:00:15.034680 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 13 13:00:15.249516 master-0 kubenswrapper[19715]: I0313 13:00:15.245800 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7fdf5454d9-tzhsm"]
Mar 13 13:00:25.718186 master-0 kubenswrapper[19715]: I0313 13:00:25.718093 19715 patch_prober.go:28] interesting pod/machine-config-daemon-mlgxw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Mar 13 13:00:25.719091 master-0 kubenswrapper[19715]: I0313 13:00:25.718254 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-mlgxw" podUID="e8d83309-58b2-40af-ab48-1f8b9aeffefb" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Mar 13 13:00:34.563174 master-0 kubenswrapper[19715]: I0313 13:00:34.563089 19715 scope.go:117] "RemoveContainer" containerID="9727c8d2dac755dd7ea1b9ad8ff6c17a8b645c1accc27700962725c430cd1484"
Mar 13 13:00:34.580530 master-0 kubenswrapper[19715]: I0313 13:00:34.580479 19715 scope.go:117] "RemoveContainer" containerID="6e9a116bda80ce7fe4e93d1c23741a0678a4bf66c268c954fc757c04183b5157"
Mar 13 13:00:34.599649 master-0 kubenswrapper[19715]: I0313 13:00:34.599605 19715 scope.go:117] "RemoveContainer" containerID="3676744a93dc4b275eb6a7cc11028760f14bb722b4e049db371fc67c6d22dd94"
Mar 13 13:00:40.344093 master-0 kubenswrapper[19715]: I0313 13:00:40.343899 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-7fdf5454d9-tzhsm" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" containerID="cri-o://874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856" gracePeriod=15
Mar 13 13:00:40.838718 master-0 kubenswrapper[19715]: I0313 13:00:40.837910 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fdf5454d9-tzhsm_897ba022-a904-4e2f-9317-d675122727fd/console/1.log"
Mar 13 13:00:40.838718 master-0 kubenswrapper[19715]: I0313 13:00:40.838529 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fdf5454d9-tzhsm_897ba022-a904-4e2f-9317-d675122727fd/console/0.log"
Mar 13 13:00:40.838718 master-0 kubenswrapper[19715]: I0313 13:00:40.838659 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fdf5454d9-tzhsm"
Mar 13 13:00:41.011010 master-0 kubenswrapper[19715]: I0313 13:00:41.010914 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-serving-cert\") pod \"897ba022-a904-4e2f-9317-d675122727fd\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") "
Mar 13 13:00:41.011010 master-0 kubenswrapper[19715]: I0313 13:00:41.011034 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-js2jf\" (UniqueName: \"kubernetes.io/projected/897ba022-a904-4e2f-9317-d675122727fd-kube-api-access-js2jf\") pod \"897ba022-a904-4e2f-9317-d675122727fd\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") "
Mar 13 13:00:41.011396 master-0 kubenswrapper[19715]: I0313 13:00:41.011062 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\"
(UniqueName: \"kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-oauth-config\") pod \"897ba022-a904-4e2f-9317-d675122727fd\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " Mar 13 13:00:41.011396 master-0 kubenswrapper[19715]: I0313 13:00:41.011128 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-oauth-serving-cert\") pod \"897ba022-a904-4e2f-9317-d675122727fd\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " Mar 13 13:00:41.011396 master-0 kubenswrapper[19715]: I0313 13:00:41.011184 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-trusted-ca-bundle\") pod \"897ba022-a904-4e2f-9317-d675122727fd\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " Mar 13 13:00:41.011396 master-0 kubenswrapper[19715]: I0313 13:00:41.011218 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-service-ca\") pod \"897ba022-a904-4e2f-9317-d675122727fd\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " Mar 13 13:00:41.011396 master-0 kubenswrapper[19715]: I0313 13:00:41.011243 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-console-config\") pod \"897ba022-a904-4e2f-9317-d675122727fd\" (UID: \"897ba022-a904-4e2f-9317-d675122727fd\") " Mar 13 13:00:41.012537 master-0 kubenswrapper[19715]: I0313 13:00:41.011917 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-console-config" (OuterVolumeSpecName: "console-config") pod "897ba022-a904-4e2f-9317-d675122727fd" (UID: 
"897ba022-a904-4e2f-9317-d675122727fd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:00:41.012537 master-0 kubenswrapper[19715]: I0313 13:00:41.011916 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "897ba022-a904-4e2f-9317-d675122727fd" (UID: "897ba022-a904-4e2f-9317-d675122727fd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:00:41.012537 master-0 kubenswrapper[19715]: I0313 13:00:41.012302 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-service-ca" (OuterVolumeSpecName: "service-ca") pod "897ba022-a904-4e2f-9317-d675122727fd" (UID: "897ba022-a904-4e2f-9317-d675122727fd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:00:41.012537 master-0 kubenswrapper[19715]: I0313 13:00:41.012330 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "897ba022-a904-4e2f-9317-d675122727fd" (UID: "897ba022-a904-4e2f-9317-d675122727fd"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:00:41.022678 master-0 kubenswrapper[19715]: I0313 13:00:41.016055 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "897ba022-a904-4e2f-9317-d675122727fd" (UID: "897ba022-a904-4e2f-9317-d675122727fd"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:00:41.022678 master-0 kubenswrapper[19715]: I0313 13:00:41.016344 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "897ba022-a904-4e2f-9317-d675122727fd" (UID: "897ba022-a904-4e2f-9317-d675122727fd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:00:41.022678 master-0 kubenswrapper[19715]: I0313 13:00:41.016513 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/897ba022-a904-4e2f-9317-d675122727fd-kube-api-access-js2jf" (OuterVolumeSpecName: "kube-api-access-js2jf") pod "897ba022-a904-4e2f-9317-d675122727fd" (UID: "897ba022-a904-4e2f-9317-d675122727fd"). InnerVolumeSpecName "kube-api-access-js2jf". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:00:41.114755 master-0 kubenswrapper[19715]: I0313 13:00:41.114641 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-js2jf\" (UniqueName: \"kubernetes.io/projected/897ba022-a904-4e2f-9317-d675122727fd-kube-api-access-js2jf\") on node \"master-0\" DevicePath \"\"" Mar 13 13:00:41.114755 master-0 kubenswrapper[19715]: I0313 13:00:41.114718 19715 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:00:41.114755 master-0 kubenswrapper[19715]: I0313 13:00:41.114731 19715 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 13:00:41.114755 master-0 kubenswrapper[19715]: I0313 13:00:41.114746 19715 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:00:41.114755 master-0 kubenswrapper[19715]: I0313 13:00:41.114775 19715 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 13:00:41.114755 master-0 kubenswrapper[19715]: I0313 13:00:41.114786 19715 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/897ba022-a904-4e2f-9317-d675122727fd-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:00:41.115333 master-0 kubenswrapper[19715]: I0313 13:00:41.114797 19715 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/897ba022-a904-4e2f-9317-d675122727fd-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 13:00:41.443909 master-0 kubenswrapper[19715]: I0313 13:00:41.443824 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fdf5454d9-tzhsm_897ba022-a904-4e2f-9317-d675122727fd/console/1.log" Mar 13 13:00:41.445486 master-0 kubenswrapper[19715]: I0313 13:00:41.445441 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7fdf5454d9-tzhsm_897ba022-a904-4e2f-9317-d675122727fd/console/0.log" Mar 13 13:00:41.445558 master-0 kubenswrapper[19715]: I0313 13:00:41.445524 19715 generic.go:334] "Generic (PLEG): container finished" podID="897ba022-a904-4e2f-9317-d675122727fd" containerID="874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856" exitCode=2 Mar 13 13:00:41.445620 master-0 kubenswrapper[19715]: I0313 13:00:41.445595 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fdf5454d9-tzhsm" 
event={"ID":"897ba022-a904-4e2f-9317-d675122727fd","Type":"ContainerDied","Data":"874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856"} Mar 13 13:00:41.445679 master-0 kubenswrapper[19715]: I0313 13:00:41.445648 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7fdf5454d9-tzhsm" event={"ID":"897ba022-a904-4e2f-9317-d675122727fd","Type":"ContainerDied","Data":"b6f6002eee7cb91a4bce1868d537e329216682ee1398b8b512268f5d8f40b09a"} Mar 13 13:00:41.445679 master-0 kubenswrapper[19715]: I0313 13:00:41.445675 19715 scope.go:117] "RemoveContainer" containerID="874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856" Mar 13 13:00:41.445821 master-0 kubenswrapper[19715]: I0313 13:00:41.445777 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7fdf5454d9-tzhsm" Mar 13 13:00:41.464857 master-0 kubenswrapper[19715]: I0313 13:00:41.464795 19715 scope.go:117] "RemoveContainer" containerID="3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af" Mar 13 13:00:41.491120 master-0 kubenswrapper[19715]: I0313 13:00:41.491030 19715 scope.go:117] "RemoveContainer" containerID="874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856" Mar 13 13:00:41.497368 master-0 kubenswrapper[19715]: E0313 13:00:41.497288 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856\": container with ID starting with 874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856 not found: ID does not exist" containerID="874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856" Mar 13 13:00:41.497368 master-0 kubenswrapper[19715]: I0313 13:00:41.497359 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856"} 
err="failed to get container status \"874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856\": rpc error: code = NotFound desc = could not find container \"874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856\": container with ID starting with 874db50e8b21ea692c1ebdbbd46d0b9cf3a732f6006ea0b5b2a56f21d7825856 not found: ID does not exist" Mar 13 13:00:41.497826 master-0 kubenswrapper[19715]: I0313 13:00:41.497397 19715 scope.go:117] "RemoveContainer" containerID="3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af" Mar 13 13:00:41.497935 master-0 kubenswrapper[19715]: I0313 13:00:41.497899 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-7fdf5454d9-tzhsm"] Mar 13 13:00:41.498042 master-0 kubenswrapper[19715]: E0313 13:00:41.497958 19715 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af\": container with ID starting with 3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af not found: ID does not exist" containerID="3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af" Mar 13 13:00:41.498121 master-0 kubenswrapper[19715]: I0313 13:00:41.498055 19715 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af"} err="failed to get container status \"3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af\": rpc error: code = NotFound desc = could not find container \"3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af\": container with ID starting with 3c06fb29b3b49af7b94660aa46417d985ee184a1a8a922ff5902c8073b5172af not found: ID does not exist" Mar 13 13:00:41.503884 master-0 kubenswrapper[19715]: I0313 13:00:41.503822 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-console/console-7fdf5454d9-tzhsm"] Mar 13 13:00:41.726077 master-0 kubenswrapper[19715]: I0313 13:00:41.724400 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="897ba022-a904-4e2f-9317-d675122727fd" path="/var/lib/kubelet/pods/897ba022-a904-4e2f-9317-d675122727fd/volumes" Mar 13 13:01:34.640164 master-0 kubenswrapper[19715]: I0313 13:01:34.639912 19715 scope.go:117] "RemoveContainer" containerID="91aba06d3555721ac7156a0d1fb3bcdde07eaa20c73d384ae32e60bb0e44531d" Mar 13 13:02:40.845550 master-0 kubenswrapper[19715]: I0313 13:02:40.845424 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m"] Mar 13 13:02:40.846595 master-0 kubenswrapper[19715]: E0313 13:02:40.846065 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" Mar 13 13:02:40.846595 master-0 kubenswrapper[19715]: I0313 13:02:40.846113 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" Mar 13 13:02:40.846595 master-0 kubenswrapper[19715]: E0313 13:02:40.846153 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" Mar 13 13:02:40.846595 master-0 kubenswrapper[19715]: I0313 13:02:40.846162 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" Mar 13 13:02:40.846595 master-0 kubenswrapper[19715]: E0313 13:02:40.846187 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" containerName="installer" Mar 13 13:02:40.846595 master-0 kubenswrapper[19715]: I0313 13:02:40.846198 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" containerName="installer" Mar 13 13:02:40.846595 master-0 
kubenswrapper[19715]: E0313 13:02:40.846233 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" Mar 13 13:02:40.846595 master-0 kubenswrapper[19715]: I0313 13:02:40.846241 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" Mar 13 13:02:40.846595 master-0 kubenswrapper[19715]: I0313 13:02:40.846508 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" Mar 13 13:02:40.846934 master-0 kubenswrapper[19715]: I0313 13:02:40.846615 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="a814bd60de133d95cf99630a978c017e" containerName="startup-monitor" Mar 13 13:02:40.846934 master-0 kubenswrapper[19715]: I0313 13:02:40.846647 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c65fa87-b404-4e2d-b730-d8e3ae5a0990" containerName="installer" Mar 13 13:02:40.849212 master-0 kubenswrapper[19715]: I0313 13:02:40.846998 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="897ba022-a904-4e2f-9317-d675122727fd" containerName="console" Mar 13 13:02:40.849212 master-0 kubenswrapper[19715]: I0313 13:02:40.848576 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:40.862606 master-0 kubenswrapper[19715]: I0313 13:02:40.862480 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m"] Mar 13 13:02:40.945160 master-0 kubenswrapper[19715]: I0313 13:02:40.945064 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:40.945672 master-0 kubenswrapper[19715]: I0313 13:02:40.945179 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:40.945672 master-0 kubenswrapper[19715]: I0313 13:02:40.945224 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwbxx\" (UniqueName: \"kubernetes.io/projected/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-kube-api-access-mwbxx\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:41.047820 master-0 kubenswrapper[19715]: I0313 13:02:41.047737 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:41.048323 master-0 kubenswrapper[19715]: I0313 13:02:41.048293 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:41.048497 master-0 kubenswrapper[19715]: I0313 13:02:41.048434 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwbxx\" (UniqueName: \"kubernetes.io/projected/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-kube-api-access-mwbxx\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:41.048935 master-0 kubenswrapper[19715]: I0313 13:02:41.048855 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:41.049304 master-0 kubenswrapper[19715]: I0313 13:02:41.049242 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-util\") pod 
\"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:41.071025 master-0 kubenswrapper[19715]: I0313 13:02:41.070933 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwbxx\" (UniqueName: \"kubernetes.io/projected/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-kube-api-access-mwbxx\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:41.172502 master-0 kubenswrapper[19715]: I0313 13:02:41.172402 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:41.635922 master-0 kubenswrapper[19715]: I0313 13:02:41.635836 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m"] Mar 13 13:02:41.646233 master-0 kubenswrapper[19715]: W0313 13:02:41.644546 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f9fc8f5_b45e_4d4d_aade_fa3870594ef5.slice/crio-85818634e87698ce63f5fcf98ccc6628c61c840b208600d2df7394749790d67e WatchSource:0}: Error finding container 85818634e87698ce63f5fcf98ccc6628c61c840b208600d2df7394749790d67e: Status 404 returned error can't find the container with id 85818634e87698ce63f5fcf98ccc6628c61c840b208600d2df7394749790d67e Mar 13 13:02:42.150659 master-0 kubenswrapper[19715]: I0313 13:02:42.150524 19715 generic.go:334] "Generic (PLEG): container finished" podID="9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" containerID="01338a6292648d30c3b20351a295e22735991b1314e5ae6c788276ee2c140d1b" exitCode=0 Mar 
13 13:02:42.150659 master-0 kubenswrapper[19715]: I0313 13:02:42.150619 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" event={"ID":"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5","Type":"ContainerDied","Data":"01338a6292648d30c3b20351a295e22735991b1314e5ae6c788276ee2c140d1b"} Mar 13 13:02:42.151835 master-0 kubenswrapper[19715]: I0313 13:02:42.150898 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" event={"ID":"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5","Type":"ContainerStarted","Data":"85818634e87698ce63f5fcf98ccc6628c61c840b208600d2df7394749790d67e"} Mar 13 13:02:42.154313 master-0 kubenswrapper[19715]: I0313 13:02:42.154277 19715 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 13:02:44.174624 master-0 kubenswrapper[19715]: I0313 13:02:44.174447 19715 generic.go:334] "Generic (PLEG): container finished" podID="9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" containerID="d5e37910ca26546f8445a27ceb07ae7637107a7c373e578e697a19a27f159e19" exitCode=0 Mar 13 13:02:44.174624 master-0 kubenswrapper[19715]: I0313 13:02:44.174570 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" event={"ID":"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5","Type":"ContainerDied","Data":"d5e37910ca26546f8445a27ceb07ae7637107a7c373e578e697a19a27f159e19"} Mar 13 13:02:45.198149 master-0 kubenswrapper[19715]: I0313 13:02:45.197889 19715 generic.go:334] "Generic (PLEG): container finished" podID="9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" containerID="4d0909d089c1586cc10dd993063a862dc2123771df41943ed4cafdb7ec35bb2c" exitCode=0 Mar 13 13:02:45.198149 master-0 kubenswrapper[19715]: I0313 13:02:45.197994 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" event={"ID":"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5","Type":"ContainerDied","Data":"4d0909d089c1586cc10dd993063a862dc2123771df41943ed4cafdb7ec35bb2c"} Mar 13 13:02:46.536227 master-0 kubenswrapper[19715]: I0313 13:02:46.536152 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:46.631615 master-0 kubenswrapper[19715]: I0313 13:02:46.631523 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-bundle\") pod \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " Mar 13 13:02:46.631615 master-0 kubenswrapper[19715]: I0313 13:02:46.631618 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mwbxx\" (UniqueName: \"kubernetes.io/projected/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-kube-api-access-mwbxx\") pod \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " Mar 13 13:02:46.632281 master-0 kubenswrapper[19715]: I0313 13:02:46.631654 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-util\") pod \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\" (UID: \"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5\") " Mar 13 13:02:46.632870 master-0 kubenswrapper[19715]: I0313 13:02:46.632781 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-bundle" (OuterVolumeSpecName: "bundle") pod "9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" (UID: "9f9fc8f5-b45e-4d4d-aade-fa3870594ef5"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:02:46.636562 master-0 kubenswrapper[19715]: I0313 13:02:46.636452 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-kube-api-access-mwbxx" (OuterVolumeSpecName: "kube-api-access-mwbxx") pod "9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" (UID: "9f9fc8f5-b45e-4d4d-aade-fa3870594ef5"). InnerVolumeSpecName "kube-api-access-mwbxx". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:02:46.656245 master-0 kubenswrapper[19715]: I0313 13:02:46.656121 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-util" (OuterVolumeSpecName: "util") pod "9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" (UID: "9f9fc8f5-b45e-4d4d-aade-fa3870594ef5"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:02:46.734722 master-0 kubenswrapper[19715]: I0313 13:02:46.734442 19715 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:02:46.734722 master-0 kubenswrapper[19715]: I0313 13:02:46.734512 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mwbxx\" (UniqueName: \"kubernetes.io/projected/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-kube-api-access-mwbxx\") on node \"master-0\" DevicePath \"\"" Mar 13 13:02:46.734722 master-0 kubenswrapper[19715]: I0313 13:02:46.734524 19715 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f9fc8f5-b45e-4d4d-aade-fa3870594ef5-util\") on node \"master-0\" DevicePath \"\"" Mar 13 13:02:47.221745 master-0 kubenswrapper[19715]: I0313 13:02:47.221658 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" event={"ID":"9f9fc8f5-b45e-4d4d-aade-fa3870594ef5","Type":"ContainerDied","Data":"85818634e87698ce63f5fcf98ccc6628c61c840b208600d2df7394749790d67e"} Mar 13 13:02:47.221745 master-0 kubenswrapper[19715]: I0313 13:02:47.221732 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85818634e87698ce63f5fcf98ccc6628c61c840b208600d2df7394749790d67e" Mar 13 13:02:47.222247 master-0 kubenswrapper[19715]: I0313 13:02:47.221769 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d47j85m" Mar 13 13:02:53.655209 master-0 kubenswrapper[19715]: I0313 13:02:53.655111 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-58d685d865-s68xl"] Mar 13 13:02:53.656044 master-0 kubenswrapper[19715]: E0313 13:02:53.655569 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" containerName="util" Mar 13 13:02:53.656044 master-0 kubenswrapper[19715]: I0313 13:02:53.655604 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" containerName="util" Mar 13 13:02:53.656044 master-0 kubenswrapper[19715]: E0313 13:02:53.655624 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" containerName="pull" Mar 13 13:02:53.656044 master-0 kubenswrapper[19715]: I0313 13:02:53.655630 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" containerName="pull" Mar 13 13:02:53.656044 master-0 kubenswrapper[19715]: E0313 13:02:53.655666 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" containerName="extract" Mar 13 13:02:53.656044 master-0 kubenswrapper[19715]: I0313 
13:02:53.655674 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" containerName="extract" Mar 13 13:02:53.656044 master-0 kubenswrapper[19715]: I0313 13:02:53.655874 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f9fc8f5-b45e-4d4d-aade-fa3870594ef5" containerName="extract" Mar 13 13:02:53.656706 master-0 kubenswrapper[19715]: I0313 13:02:53.656670 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.660433 master-0 kubenswrapper[19715]: I0313 13:02:53.660378 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 13 13:02:53.661008 master-0 kubenswrapper[19715]: I0313 13:02:53.660986 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 13 13:02:53.661423 master-0 kubenswrapper[19715]: I0313 13:02:53.661401 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 13 13:02:53.661755 master-0 kubenswrapper[19715]: I0313 13:02:53.661733 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 13 13:02:53.674483 master-0 kubenswrapper[19715]: I0313 13:02:53.674316 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-58d685d865-s68xl"] Mar 13 13:02:53.683542 master-0 kubenswrapper[19715]: I0313 13:02:53.683457 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 13 13:02:53.707890 master-0 kubenswrapper[19715]: I0313 13:02:53.707775 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd47z\" (UniqueName: 
\"kubernetes.io/projected/2f25cb23-6d50-470e-9f45-203d4a680f46-kube-api-access-sd47z\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.708327 master-0 kubenswrapper[19715]: I0313 13:02:53.707941 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f25cb23-6d50-470e-9f45-203d4a680f46-metrics-cert\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.708327 master-0 kubenswrapper[19715]: I0313 13:02:53.707977 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f25cb23-6d50-470e-9f45-203d4a680f46-apiservice-cert\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.708327 master-0 kubenswrapper[19715]: I0313 13:02:53.708015 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2f25cb23-6d50-470e-9f45-203d4a680f46-socket-dir\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.708327 master-0 kubenswrapper[19715]: I0313 13:02:53.708059 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f25cb23-6d50-470e-9f45-203d4a680f46-webhook-cert\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.811022 master-0 kubenswrapper[19715]: 
I0313 13:02:53.810921 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd47z\" (UniqueName: \"kubernetes.io/projected/2f25cb23-6d50-470e-9f45-203d4a680f46-kube-api-access-sd47z\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.811442 master-0 kubenswrapper[19715]: I0313 13:02:53.811072 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f25cb23-6d50-470e-9f45-203d4a680f46-metrics-cert\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.811442 master-0 kubenswrapper[19715]: I0313 13:02:53.811114 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f25cb23-6d50-470e-9f45-203d4a680f46-apiservice-cert\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.811442 master-0 kubenswrapper[19715]: I0313 13:02:53.811152 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2f25cb23-6d50-470e-9f45-203d4a680f46-socket-dir\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.811442 master-0 kubenswrapper[19715]: I0313 13:02:53.811262 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f25cb23-6d50-470e-9f45-203d4a680f46-webhook-cert\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " 
pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.812601 master-0 kubenswrapper[19715]: I0313 13:02:53.812271 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/2f25cb23-6d50-470e-9f45-203d4a680f46-socket-dir\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.816392 master-0 kubenswrapper[19715]: I0313 13:02:53.816328 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/2f25cb23-6d50-470e-9f45-203d4a680f46-metrics-cert\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.816960 master-0 kubenswrapper[19715]: I0313 13:02:53.816891 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2f25cb23-6d50-470e-9f45-203d4a680f46-apiservice-cert\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.817340 master-0 kubenswrapper[19715]: I0313 13:02:53.817285 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2f25cb23-6d50-470e-9f45-203d4a680f46-webhook-cert\") pod \"lvms-operator-58d685d865-s68xl\" (UID: \"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.868655 master-0 kubenswrapper[19715]: I0313 13:02:53.868508 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd47z\" (UniqueName: \"kubernetes.io/projected/2f25cb23-6d50-470e-9f45-203d4a680f46-kube-api-access-sd47z\") pod \"lvms-operator-58d685d865-s68xl\" (UID: 
\"2f25cb23-6d50-470e-9f45-203d4a680f46\") " pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:53.981424 master-0 kubenswrapper[19715]: I0313 13:02:53.981264 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:02:54.455665 master-0 kubenswrapper[19715]: I0313 13:02:54.455566 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-58d685d865-s68xl"] Mar 13 13:02:55.304976 master-0 kubenswrapper[19715]: I0313 13:02:55.304849 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-58d685d865-s68xl" event={"ID":"2f25cb23-6d50-470e-9f45-203d4a680f46","Type":"ContainerStarted","Data":"392b7e769e8fa2681869ae24339f7d9dbc6f2cb92890530dcfd65d9311270a7f"} Mar 13 13:03:00.378370 master-0 kubenswrapper[19715]: I0313 13:03:00.378112 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-58d685d865-s68xl" event={"ID":"2f25cb23-6d50-470e-9f45-203d4a680f46","Type":"ContainerStarted","Data":"75ffc08051fbba7edaf20de356afcff403d3da6955cf50c993bf48564b0ccfe2"} Mar 13 13:03:00.379248 master-0 kubenswrapper[19715]: I0313 13:03:00.378889 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:03:00.408816 master-0 kubenswrapper[19715]: I0313 13:03:00.408681 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-58d685d865-s68xl" podStartSLOduration=1.782942259 podStartE2EDuration="7.408607873s" podCreationTimestamp="2026-03-13 13:02:53 +0000 UTC" firstStartedPulling="2026-03-13 13:02:54.465009941 +0000 UTC m=+801.031682688" lastFinishedPulling="2026-03-13 13:03:00.090675545 +0000 UTC m=+806.657348302" observedRunningTime="2026-03-13 13:03:00.403283026 +0000 UTC m=+806.969955803" watchObservedRunningTime="2026-03-13 13:03:00.408607873 +0000 
UTC m=+806.975280640" Mar 13 13:03:01.392495 master-0 kubenswrapper[19715]: I0313 13:03:01.392398 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-58d685d865-s68xl" Mar 13 13:03:04.850943 master-0 kubenswrapper[19715]: I0313 13:03:04.850848 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx"] Mar 13 13:03:04.853051 master-0 kubenswrapper[19715]: I0313 13:03:04.853011 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:04.866559 master-0 kubenswrapper[19715]: I0313 13:03:04.866448 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx"] Mar 13 13:03:05.039910 master-0 kubenswrapper[19715]: I0313 13:03:05.039815 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vv44f\" (UniqueName: \"kubernetes.io/projected/21e7b1a8-5baa-406f-9769-bcefec1ec69a-kube-api-access-vv44f\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:05.039910 master-0 kubenswrapper[19715]: I0313 13:03:05.039906 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:05.040378 master-0 kubenswrapper[19715]: I0313 13:03:05.040257 
19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:05.142326 master-0 kubenswrapper[19715]: I0313 13:03:05.142245 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:05.142815 master-0 kubenswrapper[19715]: I0313 13:03:05.142446 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vv44f\" (UniqueName: \"kubernetes.io/projected/21e7b1a8-5baa-406f-9769-bcefec1ec69a-kube-api-access-vv44f\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:05.142815 master-0 kubenswrapper[19715]: I0313 13:03:05.142493 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:05.143058 master-0 kubenswrapper[19715]: I0313 13:03:05.143016 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:05.143118 master-0 kubenswrapper[19715]: I0313 13:03:05.143077 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:05.160460 master-0 kubenswrapper[19715]: I0313 13:03:05.160412 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vv44f\" (UniqueName: \"kubernetes.io/projected/21e7b1a8-5baa-406f-9769-bcefec1ec69a-kube-api-access-vv44f\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:05.172717 master-0 kubenswrapper[19715]: I0313 13:03:05.172629 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:05.479722 master-0 kubenswrapper[19715]: I0313 13:03:05.478916 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s"] Mar 13 13:03:05.482659 master-0 kubenswrapper[19715]: I0313 13:03:05.482556 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:05.494709 master-0 kubenswrapper[19715]: I0313 13:03:05.494621 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s"] Mar 13 13:03:05.654979 master-0 kubenswrapper[19715]: I0313 13:03:05.654752 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:05.654979 master-0 kubenswrapper[19715]: I0313 13:03:05.654875 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8jjb\" (UniqueName: \"kubernetes.io/projected/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-kube-api-access-l8jjb\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:05.654979 master-0 kubenswrapper[19715]: I0313 13:03:05.654929 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:05.715052 master-0 kubenswrapper[19715]: W0313 13:03:05.714960 19715 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod21e7b1a8_5baa_406f_9769_bcefec1ec69a.slice/crio-357af279e3e7902943c07661dbfc179d0fce5e518bd660f3aa8b5600bf05f08c WatchSource:0}: Error finding container 357af279e3e7902943c07661dbfc179d0fce5e518bd660f3aa8b5600bf05f08c: Status 404 returned error can't find the container with id 357af279e3e7902943c07661dbfc179d0fce5e518bd660f3aa8b5600bf05f08c Mar 13 13:03:05.717297 master-0 kubenswrapper[19715]: I0313 13:03:05.717218 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx"] Mar 13 13:03:05.758350 master-0 kubenswrapper[19715]: I0313 13:03:05.756948 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8jjb\" (UniqueName: \"kubernetes.io/projected/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-kube-api-access-l8jjb\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:05.758350 master-0 kubenswrapper[19715]: I0313 13:03:05.757030 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:05.758350 master-0 kubenswrapper[19715]: I0313 13:03:05.757501 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:05.758350 master-0 kubenswrapper[19715]: I0313 13:03:05.758109 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:05.758350 master-0 kubenswrapper[19715]: I0313 13:03:05.758274 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:05.778106 master-0 kubenswrapper[19715]: I0313 13:03:05.778035 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8jjb\" (UniqueName: \"kubernetes.io/projected/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-kube-api-access-l8jjb\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:05.824419 master-0 kubenswrapper[19715]: I0313 13:03:05.823854 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:06.249362 master-0 kubenswrapper[19715]: I0313 13:03:06.249273 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc"] Mar 13 13:03:06.251333 master-0 kubenswrapper[19715]: I0313 13:03:06.251267 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:06.278707 master-0 kubenswrapper[19715]: I0313 13:03:06.278635 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc"] Mar 13 13:03:06.319739 master-0 kubenswrapper[19715]: I0313 13:03:06.319683 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s"] Mar 13 13:03:06.378687 master-0 kubenswrapper[19715]: I0313 13:03:06.376868 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j47p7\" (UniqueName: \"kubernetes.io/projected/14a12380-2a76-4a18-bcc6-71c2f2949270-kube-api-access-j47p7\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:06.378687 master-0 kubenswrapper[19715]: I0313 13:03:06.378491 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:06.378687 master-0 kubenswrapper[19715]: I0313 13:03:06.378608 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:06.429086 master-0 kubenswrapper[19715]: I0313 13:03:06.429024 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" event={"ID":"f8bb4265-31d7-459e-aea3-fc4087bdd1f7","Type":"ContainerStarted","Data":"9060420de5f2157e6e57b10cd501f500a0b8f4c7ef32f1eb507282bbe50e8bb0"} Mar 13 13:03:06.431023 master-0 kubenswrapper[19715]: I0313 13:03:06.430977 19715 generic.go:334] "Generic (PLEG): container finished" podID="21e7b1a8-5baa-406f-9769-bcefec1ec69a" containerID="e41f39f9aa5b95bf5c2e150a0900c6d95f0ac0decf8c5ed50ccda803f8d192db" exitCode=0 Mar 13 13:03:06.431084 master-0 kubenswrapper[19715]: I0313 13:03:06.431039 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" event={"ID":"21e7b1a8-5baa-406f-9769-bcefec1ec69a","Type":"ContainerDied","Data":"e41f39f9aa5b95bf5c2e150a0900c6d95f0ac0decf8c5ed50ccda803f8d192db"} Mar 13 13:03:06.431084 master-0 kubenswrapper[19715]: I0313 13:03:06.431079 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" event={"ID":"21e7b1a8-5baa-406f-9769-bcefec1ec69a","Type":"ContainerStarted","Data":"357af279e3e7902943c07661dbfc179d0fce5e518bd660f3aa8b5600bf05f08c"} Mar 13 13:03:06.481006 
master-0 kubenswrapper[19715]: I0313 13:03:06.480895 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:06.481350 master-0 kubenswrapper[19715]: I0313 13:03:06.481149 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j47p7\" (UniqueName: \"kubernetes.io/projected/14a12380-2a76-4a18-bcc6-71c2f2949270-kube-api-access-j47p7\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:06.482569 master-0 kubenswrapper[19715]: I0313 13:03:06.481369 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:06.482569 master-0 kubenswrapper[19715]: I0313 13:03:06.481832 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:06.482569 master-0 kubenswrapper[19715]: I0313 13:03:06.481993 19715 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:06.503473 master-0 kubenswrapper[19715]: I0313 13:03:06.503375 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j47p7\" (UniqueName: \"kubernetes.io/projected/14a12380-2a76-4a18-bcc6-71c2f2949270-kube-api-access-j47p7\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:06.580914 master-0 kubenswrapper[19715]: I0313 13:03:06.580807 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:07.069935 master-0 kubenswrapper[19715]: I0313 13:03:07.069873 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc"] Mar 13 13:03:07.443853 master-0 kubenswrapper[19715]: I0313 13:03:07.443781 19715 generic.go:334] "Generic (PLEG): container finished" podID="14a12380-2a76-4a18-bcc6-71c2f2949270" containerID="9a2e5aea666e72bcca0af0404644cf6a81a93a922e16974220949d8917b26a37" exitCode=0 Mar 13 13:03:07.444672 master-0 kubenswrapper[19715]: I0313 13:03:07.443885 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" event={"ID":"14a12380-2a76-4a18-bcc6-71c2f2949270","Type":"ContainerDied","Data":"9a2e5aea666e72bcca0af0404644cf6a81a93a922e16974220949d8917b26a37"} Mar 13 13:03:07.444672 master-0 kubenswrapper[19715]: I0313 
13:03:07.443926 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" event={"ID":"14a12380-2a76-4a18-bcc6-71c2f2949270","Type":"ContainerStarted","Data":"170ec31df40cf6ad887d63790982af33ffe390ce70884583ae8f98b7e5a55dce"} Mar 13 13:03:07.446575 master-0 kubenswrapper[19715]: I0313 13:03:07.446485 19715 generic.go:334] "Generic (PLEG): container finished" podID="f8bb4265-31d7-459e-aea3-fc4087bdd1f7" containerID="0b8bc021d22cf3c360a972dd53e8cf0942e96df77b413649a90c6dd95e13aacc" exitCode=0 Mar 13 13:03:07.446768 master-0 kubenswrapper[19715]: I0313 13:03:07.446598 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" event={"ID":"f8bb4265-31d7-459e-aea3-fc4087bdd1f7","Type":"ContainerDied","Data":"0b8bc021d22cf3c360a972dd53e8cf0942e96df77b413649a90c6dd95e13aacc"} Mar 13 13:03:09.490653 master-0 kubenswrapper[19715]: I0313 13:03:09.490541 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" event={"ID":"21e7b1a8-5baa-406f-9769-bcefec1ec69a","Type":"ContainerStarted","Data":"ba175c925d36688de72b8b8a40797db06241f426f255ecbd7005068c68b47266"} Mar 13 13:03:10.501550 master-0 kubenswrapper[19715]: I0313 13:03:10.501457 19715 generic.go:334] "Generic (PLEG): container finished" podID="f8bb4265-31d7-459e-aea3-fc4087bdd1f7" containerID="a1f55caee0f26bdb18cfa11533883e5c23175a3df741a8b01741e39a914385bd" exitCode=0 Mar 13 13:03:10.502240 master-0 kubenswrapper[19715]: I0313 13:03:10.501596 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" event={"ID":"f8bb4265-31d7-459e-aea3-fc4087bdd1f7","Type":"ContainerDied","Data":"a1f55caee0f26bdb18cfa11533883e5c23175a3df741a8b01741e39a914385bd"} Mar 13 
13:03:10.505255 master-0 kubenswrapper[19715]: I0313 13:03:10.505190 19715 generic.go:334] "Generic (PLEG): container finished" podID="14a12380-2a76-4a18-bcc6-71c2f2949270" containerID="c693adc62f93ba1f96a0ce481c0dfaa8307d8dcbcb57090d7e1a69c51b98e2fb" exitCode=0 Mar 13 13:03:10.505360 master-0 kubenswrapper[19715]: I0313 13:03:10.505325 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" event={"ID":"14a12380-2a76-4a18-bcc6-71c2f2949270","Type":"ContainerDied","Data":"c693adc62f93ba1f96a0ce481c0dfaa8307d8dcbcb57090d7e1a69c51b98e2fb"} Mar 13 13:03:10.509549 master-0 kubenswrapper[19715]: I0313 13:03:10.509503 19715 generic.go:334] "Generic (PLEG): container finished" podID="21e7b1a8-5baa-406f-9769-bcefec1ec69a" containerID="ba175c925d36688de72b8b8a40797db06241f426f255ecbd7005068c68b47266" exitCode=0 Mar 13 13:03:10.509637 master-0 kubenswrapper[19715]: I0313 13:03:10.509583 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" event={"ID":"21e7b1a8-5baa-406f-9769-bcefec1ec69a","Type":"ContainerDied","Data":"ba175c925d36688de72b8b8a40797db06241f426f255ecbd7005068c68b47266"} Mar 13 13:03:11.529124 master-0 kubenswrapper[19715]: I0313 13:03:11.529050 19715 generic.go:334] "Generic (PLEG): container finished" podID="f8bb4265-31d7-459e-aea3-fc4087bdd1f7" containerID="4f0149d9ec2b13750f0bdf45de0ebf5c6e841ba63946ef5a89c538d543e1f597" exitCode=0 Mar 13 13:03:11.529124 master-0 kubenswrapper[19715]: I0313 13:03:11.529127 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" event={"ID":"f8bb4265-31d7-459e-aea3-fc4087bdd1f7","Type":"ContainerDied","Data":"4f0149d9ec2b13750f0bdf45de0ebf5c6e841ba63946ef5a89c538d543e1f597"} Mar 13 13:03:11.534488 master-0 kubenswrapper[19715]: I0313 13:03:11.534441 
19715 generic.go:334] "Generic (PLEG): container finished" podID="14a12380-2a76-4a18-bcc6-71c2f2949270" containerID="db8d34c6b06c19d9981fdc22a27cefd83dff60078836f68a4a4709e5430ff1c7" exitCode=0 Mar 13 13:03:11.534619 master-0 kubenswrapper[19715]: I0313 13:03:11.534520 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" event={"ID":"14a12380-2a76-4a18-bcc6-71c2f2949270","Type":"ContainerDied","Data":"db8d34c6b06c19d9981fdc22a27cefd83dff60078836f68a4a4709e5430ff1c7"} Mar 13 13:03:11.538076 master-0 kubenswrapper[19715]: I0313 13:03:11.538022 19715 generic.go:334] "Generic (PLEG): container finished" podID="21e7b1a8-5baa-406f-9769-bcefec1ec69a" containerID="15ebb0aaa909ec337ee98ff332f252396f2377d395466e14321b3b31421dd56b" exitCode=0 Mar 13 13:03:11.538155 master-0 kubenswrapper[19715]: I0313 13:03:11.538087 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" event={"ID":"21e7b1a8-5baa-406f-9769-bcefec1ec69a","Type":"ContainerDied","Data":"15ebb0aaa909ec337ee98ff332f252396f2377d395466e14321b3b31421dd56b"} Mar 13 13:03:12.961375 master-0 kubenswrapper[19715]: I0313 13:03:12.961295 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:13.110945 master-0 kubenswrapper[19715]: I0313 13:03:13.110726 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-bundle\") pod \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " Mar 13 13:03:13.111430 master-0 kubenswrapper[19715]: I0313 13:03:13.110956 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-util\") pod \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " Mar 13 13:03:13.111430 master-0 kubenswrapper[19715]: I0313 13:03:13.111053 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8jjb\" (UniqueName: \"kubernetes.io/projected/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-kube-api-access-l8jjb\") pod \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\" (UID: \"f8bb4265-31d7-459e-aea3-fc4087bdd1f7\") " Mar 13 13:03:13.112191 master-0 kubenswrapper[19715]: I0313 13:03:13.112131 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-bundle" (OuterVolumeSpecName: "bundle") pod "f8bb4265-31d7-459e-aea3-fc4087bdd1f7" (UID: "f8bb4265-31d7-459e-aea3-fc4087bdd1f7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:03:13.121749 master-0 kubenswrapper[19715]: I0313 13:03:13.121645 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-util" (OuterVolumeSpecName: "util") pod "f8bb4265-31d7-459e-aea3-fc4087bdd1f7" (UID: "f8bb4265-31d7-459e-aea3-fc4087bdd1f7"). 
InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:03:13.168878 master-0 kubenswrapper[19715]: I0313 13:03:13.168514 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-kube-api-access-l8jjb" (OuterVolumeSpecName: "kube-api-access-l8jjb") pod "f8bb4265-31d7-459e-aea3-fc4087bdd1f7" (UID: "f8bb4265-31d7-459e-aea3-fc4087bdd1f7"). InnerVolumeSpecName "kube-api-access-l8jjb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:03:13.219998 master-0 kubenswrapper[19715]: I0313 13:03:13.219922 19715 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-util\") on node \"master-0\" DevicePath \"\"" Mar 13 13:03:13.219998 master-0 kubenswrapper[19715]: I0313 13:03:13.219980 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8jjb\" (UniqueName: \"kubernetes.io/projected/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-kube-api-access-l8jjb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:03:13.219998 master-0 kubenswrapper[19715]: I0313 13:03:13.219995 19715 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f8bb4265-31d7-459e-aea3-fc4087bdd1f7-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:03:13.245698 master-0 kubenswrapper[19715]: I0313 13:03:13.245513 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:13.257287 master-0 kubenswrapper[19715]: I0313 13:03:13.257221 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:13.422635 master-0 kubenswrapper[19715]: I0313 13:03:13.422541 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j47p7\" (UniqueName: \"kubernetes.io/projected/14a12380-2a76-4a18-bcc6-71c2f2949270-kube-api-access-j47p7\") pod \"14a12380-2a76-4a18-bcc6-71c2f2949270\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " Mar 13 13:03:13.423069 master-0 kubenswrapper[19715]: I0313 13:03:13.422706 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-bundle\") pod \"14a12380-2a76-4a18-bcc6-71c2f2949270\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " Mar 13 13:03:13.423069 master-0 kubenswrapper[19715]: I0313 13:03:13.422747 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-util\") pod \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " Mar 13 13:03:13.423069 master-0 kubenswrapper[19715]: I0313 13:03:13.422785 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-util\") pod \"14a12380-2a76-4a18-bcc6-71c2f2949270\" (UID: \"14a12380-2a76-4a18-bcc6-71c2f2949270\") " Mar 13 13:03:13.423069 master-0 kubenswrapper[19715]: I0313 13:03:13.422924 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-bundle\") pod \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " Mar 13 13:03:13.423069 master-0 kubenswrapper[19715]: I0313 13:03:13.422965 19715 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vv44f\" (UniqueName: \"kubernetes.io/projected/21e7b1a8-5baa-406f-9769-bcefec1ec69a-kube-api-access-vv44f\") pod \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\" (UID: \"21e7b1a8-5baa-406f-9769-bcefec1ec69a\") " Mar 13 13:03:13.424310 master-0 kubenswrapper[19715]: I0313 13:03:13.424249 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-bundle" (OuterVolumeSpecName: "bundle") pod "14a12380-2a76-4a18-bcc6-71c2f2949270" (UID: "14a12380-2a76-4a18-bcc6-71c2f2949270"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:03:13.424725 master-0 kubenswrapper[19715]: I0313 13:03:13.424638 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-bundle" (OuterVolumeSpecName: "bundle") pod "21e7b1a8-5baa-406f-9769-bcefec1ec69a" (UID: "21e7b1a8-5baa-406f-9769-bcefec1ec69a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:03:13.426732 master-0 kubenswrapper[19715]: I0313 13:03:13.426676 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14a12380-2a76-4a18-bcc6-71c2f2949270-kube-api-access-j47p7" (OuterVolumeSpecName: "kube-api-access-j47p7") pod "14a12380-2a76-4a18-bcc6-71c2f2949270" (UID: "14a12380-2a76-4a18-bcc6-71c2f2949270"). InnerVolumeSpecName "kube-api-access-j47p7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:03:13.429481 master-0 kubenswrapper[19715]: I0313 13:03:13.429441 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21e7b1a8-5baa-406f-9769-bcefec1ec69a-kube-api-access-vv44f" (OuterVolumeSpecName: "kube-api-access-vv44f") pod "21e7b1a8-5baa-406f-9769-bcefec1ec69a" (UID: "21e7b1a8-5baa-406f-9769-bcefec1ec69a"). InnerVolumeSpecName "kube-api-access-vv44f". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:03:13.435340 master-0 kubenswrapper[19715]: I0313 13:03:13.435239 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-util" (OuterVolumeSpecName: "util") pod "21e7b1a8-5baa-406f-9769-bcefec1ec69a" (UID: "21e7b1a8-5baa-406f-9769-bcefec1ec69a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:03:13.435536 master-0 kubenswrapper[19715]: I0313 13:03:13.435490 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-util" (OuterVolumeSpecName: "util") pod "14a12380-2a76-4a18-bcc6-71c2f2949270" (UID: "14a12380-2a76-4a18-bcc6-71c2f2949270"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 13:03:13.533614 master-0 kubenswrapper[19715]: I0313 13:03:13.533522 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vv44f\" (UniqueName: \"kubernetes.io/projected/21e7b1a8-5baa-406f-9769-bcefec1ec69a-kube-api-access-vv44f\") on node \"master-0\" DevicePath \"\"" Mar 13 13:03:13.533910 master-0 kubenswrapper[19715]: I0313 13:03:13.533760 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j47p7\" (UniqueName: \"kubernetes.io/projected/14a12380-2a76-4a18-bcc6-71c2f2949270-kube-api-access-j47p7\") on node \"master-0\" DevicePath \"\"" Mar 13 13:03:13.533910 master-0 kubenswrapper[19715]: I0313 13:03:13.533788 19715 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:03:13.533910 master-0 kubenswrapper[19715]: I0313 13:03:13.533798 19715 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-util\") on node \"master-0\" DevicePath \"\"" Mar 13 13:03:13.533910 master-0 kubenswrapper[19715]: I0313 13:03:13.533808 19715 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/14a12380-2a76-4a18-bcc6-71c2f2949270-util\") on node \"master-0\" DevicePath \"\"" Mar 13 13:03:13.533910 master-0 kubenswrapper[19715]: I0313 13:03:13.533820 19715 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/21e7b1a8-5baa-406f-9769-bcefec1ec69a-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:03:13.560786 master-0 kubenswrapper[19715]: I0313 13:03:13.560673 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" 
event={"ID":"14a12380-2a76-4a18-bcc6-71c2f2949270","Type":"ContainerDied","Data":"170ec31df40cf6ad887d63790982af33ffe390ce70884583ae8f98b7e5a55dce"} Mar 13 13:03:13.560786 master-0 kubenswrapper[19715]: I0313 13:03:13.560732 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="170ec31df40cf6ad887d63790982af33ffe390ce70884583ae8f98b7e5a55dce" Mar 13 13:03:13.561216 master-0 kubenswrapper[19715]: I0313 13:03:13.560854 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874dpbdc" Mar 13 13:03:13.565115 master-0 kubenswrapper[19715]: I0313 13:03:13.565065 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" event={"ID":"21e7b1a8-5baa-406f-9769-bcefec1ec69a","Type":"ContainerDied","Data":"357af279e3e7902943c07661dbfc179d0fce5e518bd660f3aa8b5600bf05f08c"} Mar 13 13:03:13.565115 master-0 kubenswrapper[19715]: I0313 13:03:13.565106 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="357af279e3e7902943c07661dbfc179d0fce5e518bd660f3aa8b5600bf05f08c" Mar 13 13:03:13.565354 master-0 kubenswrapper[19715]: I0313 13:03:13.565173 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5hkjsx" Mar 13 13:03:13.571736 master-0 kubenswrapper[19715]: I0313 13:03:13.571617 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" event={"ID":"f8bb4265-31d7-459e-aea3-fc4087bdd1f7","Type":"ContainerDied","Data":"9060420de5f2157e6e57b10cd501f500a0b8f4c7ef32f1eb507282bbe50e8bb0"} Mar 13 13:03:13.571736 master-0 kubenswrapper[19715]: I0313 13:03:13.571700 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9060420de5f2157e6e57b10cd501f500a0b8f4c7ef32f1eb507282bbe50e8bb0" Mar 13 13:03:13.572372 master-0 kubenswrapper[19715]: I0313 13:03:13.571813 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jhx8s" Mar 13 13:03:16.041033 master-0 kubenswrapper[19715]: I0313 13:03:16.040932 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k"] Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: E0313 13:03:16.041391 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8bb4265-31d7-459e-aea3-fc4087bdd1f7" containerName="extract" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: I0313 13:03:16.041416 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8bb4265-31d7-459e-aea3-fc4087bdd1f7" containerName="extract" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: E0313 13:03:16.041431 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8bb4265-31d7-459e-aea3-fc4087bdd1f7" containerName="util" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: I0313 13:03:16.041438 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8bb4265-31d7-459e-aea3-fc4087bdd1f7" 
containerName="util" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: E0313 13:03:16.041462 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e7b1a8-5baa-406f-9769-bcefec1ec69a" containerName="extract" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: I0313 13:03:16.041471 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e7b1a8-5baa-406f-9769-bcefec1ec69a" containerName="extract" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: E0313 13:03:16.041481 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a12380-2a76-4a18-bcc6-71c2f2949270" containerName="util" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: I0313 13:03:16.041488 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a12380-2a76-4a18-bcc6-71c2f2949270" containerName="util" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: E0313 13:03:16.041502 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a12380-2a76-4a18-bcc6-71c2f2949270" containerName="extract" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: I0313 13:03:16.041508 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a12380-2a76-4a18-bcc6-71c2f2949270" containerName="extract" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: E0313 13:03:16.041523 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e7b1a8-5baa-406f-9769-bcefec1ec69a" containerName="util" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: I0313 13:03:16.041529 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="21e7b1a8-5baa-406f-9769-bcefec1ec69a" containerName="util" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: E0313 13:03:16.041544 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21e7b1a8-5baa-406f-9769-bcefec1ec69a" containerName="pull" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: I0313 13:03:16.041550 19715 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="21e7b1a8-5baa-406f-9769-bcefec1ec69a" containerName="pull" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: E0313 13:03:16.041570 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14a12380-2a76-4a18-bcc6-71c2f2949270" containerName="pull" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: I0313 13:03:16.041612 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="14a12380-2a76-4a18-bcc6-71c2f2949270" containerName="pull" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: E0313 13:03:16.041635 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f8bb4265-31d7-459e-aea3-fc4087bdd1f7" containerName="pull" Mar 13 13:03:16.041695 master-0 kubenswrapper[19715]: I0313 13:03:16.041640 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8bb4265-31d7-459e-aea3-fc4087bdd1f7" containerName="pull" Mar 13 13:03:16.042296 master-0 kubenswrapper[19715]: I0313 13:03:16.041839 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="21e7b1a8-5baa-406f-9769-bcefec1ec69a" containerName="extract" Mar 13 13:03:16.042296 master-0 kubenswrapper[19715]: I0313 13:03:16.041865 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="14a12380-2a76-4a18-bcc6-71c2f2949270" containerName="extract" Mar 13 13:03:16.042296 master-0 kubenswrapper[19715]: I0313 13:03:16.041896 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="f8bb4265-31d7-459e-aea3-fc4087bdd1f7" containerName="extract" Mar 13 13:03:16.043269 master-0 kubenswrapper[19715]: I0313 13:03:16.043239 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.081195 master-0 kubenswrapper[19715]: I0313 13:03:16.081119 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k"] Mar 13 13:03:16.090171 master-0 kubenswrapper[19715]: I0313 13:03:16.090096 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.090171 master-0 kubenswrapper[19715]: I0313 13:03:16.090169 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.090518 master-0 kubenswrapper[19715]: I0313 13:03:16.090378 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx88n\" (UniqueName: \"kubernetes.io/projected/c0c7af55-7d88-41f5-bc41-842be0d8bc83-kube-api-access-dx88n\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.192541 master-0 kubenswrapper[19715]: I0313 13:03:16.192464 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.192945 master-0 kubenswrapper[19715]: I0313 13:03:16.192926 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.193063 master-0 kubenswrapper[19715]: I0313 13:03:16.193048 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dx88n\" (UniqueName: \"kubernetes.io/projected/c0c7af55-7d88-41f5-bc41-842be0d8bc83-kube-api-access-dx88n\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.193407 master-0 kubenswrapper[19715]: I0313 13:03:16.193344 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.202788 master-0 kubenswrapper[19715]: I0313 13:03:16.193699 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-util\") pod 
\"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.221605 master-0 kubenswrapper[19715]: I0313 13:03:16.216459 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dx88n\" (UniqueName: \"kubernetes.io/projected/c0c7af55-7d88-41f5-bc41-842be0d8bc83-kube-api-access-dx88n\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.371705 master-0 kubenswrapper[19715]: I0313 13:03:16.371130 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" Mar 13 13:03:16.875517 master-0 kubenswrapper[19715]: I0313 13:03:16.875393 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k"] Mar 13 13:03:17.643067 master-0 kubenswrapper[19715]: I0313 13:03:17.642993 19715 generic.go:334] "Generic (PLEG): container finished" podID="c0c7af55-7d88-41f5-bc41-842be0d8bc83" containerID="159f2bf0cf15054f83bf6cc4920608ef96b9ebf1c81d694d70e1860642c2ee36" exitCode=0 Mar 13 13:03:17.645637 master-0 kubenswrapper[19715]: I0313 13:03:17.644192 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" event={"ID":"c0c7af55-7d88-41f5-bc41-842be0d8bc83","Type":"ContainerDied","Data":"159f2bf0cf15054f83bf6cc4920608ef96b9ebf1c81d694d70e1860642c2ee36"} Mar 13 13:03:17.645637 master-0 kubenswrapper[19715]: I0313 13:03:17.644251 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" event={"ID":"c0c7af55-7d88-41f5-bc41-842be0d8bc83","Type":"ContainerStarted","Data":"b1bc1db65a3a08aca78ef678ee32eb7397d6bcdc102a9d8e2a99631e8d51e7b1"} Mar 13 13:03:18.396356 master-0 kubenswrapper[19715]: I0313 13:03:18.396189 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4"] Mar 13 13:03:18.397490 master-0 kubenswrapper[19715]: I0313 13:03:18.397443 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" Mar 13 13:03:18.401279 master-0 kubenswrapper[19715]: I0313 13:03:18.401233 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 13 13:03:18.401507 master-0 kubenswrapper[19715]: I0313 13:03:18.401241 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 13 13:03:18.417150 master-0 kubenswrapper[19715]: I0313 13:03:18.417061 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4"] Mar 13 13:03:18.466806 master-0 kubenswrapper[19715]: I0313 13:03:18.466750 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjzmr\" (UniqueName: \"kubernetes.io/projected/f7fee441-ffbd-4fb9-b856-cf6a093d2b14-kube-api-access-cjzmr\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ph2d4\" (UID: \"f7fee441-ffbd-4fb9-b856-cf6a093d2b14\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" Mar 13 13:03:18.467261 master-0 kubenswrapper[19715]: I0313 13:03:18.467169 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" 
(UniqueName: \"kubernetes.io/empty-dir/f7fee441-ffbd-4fb9-b856-cf6a093d2b14-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ph2d4\" (UID: \"f7fee441-ffbd-4fb9-b856-cf6a093d2b14\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" Mar 13 13:03:18.568983 master-0 kubenswrapper[19715]: I0313 13:03:18.568922 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjzmr\" (UniqueName: \"kubernetes.io/projected/f7fee441-ffbd-4fb9-b856-cf6a093d2b14-kube-api-access-cjzmr\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ph2d4\" (UID: \"f7fee441-ffbd-4fb9-b856-cf6a093d2b14\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" Mar 13 13:03:18.568983 master-0 kubenswrapper[19715]: I0313 13:03:18.568988 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f7fee441-ffbd-4fb9-b856-cf6a093d2b14-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ph2d4\" (UID: \"f7fee441-ffbd-4fb9-b856-cf6a093d2b14\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" Mar 13 13:03:18.569563 master-0 kubenswrapper[19715]: I0313 13:03:18.569537 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f7fee441-ffbd-4fb9-b856-cf6a093d2b14-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-ph2d4\" (UID: \"f7fee441-ffbd-4fb9-b856-cf6a093d2b14\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" Mar 13 13:03:18.589919 master-0 kubenswrapper[19715]: I0313 13:03:18.589840 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjzmr\" (UniqueName: \"kubernetes.io/projected/f7fee441-ffbd-4fb9-b856-cf6a093d2b14-kube-api-access-cjzmr\") pod 
\"cert-manager-operator-controller-manager-66c8bdd694-ph2d4\" (UID: \"f7fee441-ffbd-4fb9-b856-cf6a093d2b14\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" Mar 13 13:03:18.762633 master-0 kubenswrapper[19715]: I0313 13:03:18.762450 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" Mar 13 13:03:19.274522 master-0 kubenswrapper[19715]: W0313 13:03:19.274469 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7fee441_ffbd_4fb9_b856_cf6a093d2b14.slice/crio-34e4e09960c3b4d9323b378b1ae39dab754c65f00a14607ecfd45f68a4258fca WatchSource:0}: Error finding container 34e4e09960c3b4d9323b378b1ae39dab754c65f00a14607ecfd45f68a4258fca: Status 404 returned error can't find the container with id 34e4e09960c3b4d9323b378b1ae39dab754c65f00a14607ecfd45f68a4258fca Mar 13 13:03:19.276819 master-0 kubenswrapper[19715]: I0313 13:03:19.276747 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4"] Mar 13 13:03:19.659010 master-0 kubenswrapper[19715]: I0313 13:03:19.658933 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" event={"ID":"f7fee441-ffbd-4fb9-b856-cf6a093d2b14","Type":"ContainerStarted","Data":"34e4e09960c3b4d9323b378b1ae39dab754c65f00a14607ecfd45f68a4258fca"} Mar 13 13:03:19.663405 master-0 kubenswrapper[19715]: I0313 13:03:19.663344 19715 generic.go:334] "Generic (PLEG): container finished" podID="c0c7af55-7d88-41f5-bc41-842be0d8bc83" containerID="e11350e76a16774abae7a81dbaf70fb25a5c9ac628966d205c7a7d1452637671" exitCode=0 Mar 13 13:03:19.663405 master-0 kubenswrapper[19715]: I0313 13:03:19.663406 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" event={"ID":"c0c7af55-7d88-41f5-bc41-842be0d8bc83","Type":"ContainerDied","Data":"e11350e76a16774abae7a81dbaf70fb25a5c9ac628966d205c7a7d1452637671"}
Mar 13 13:03:20.680160 master-0 kubenswrapper[19715]: I0313 13:03:20.679964 19715 generic.go:334] "Generic (PLEG): container finished" podID="c0c7af55-7d88-41f5-bc41-842be0d8bc83" containerID="548eb2dcc0de2c8fa2e49759c4fa1f66441a62adc85c32b94883eadb1cbc1aa4" exitCode=0
Mar 13 13:03:20.680160 master-0 kubenswrapper[19715]: I0313 13:03:20.680052 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" event={"ID":"c0c7af55-7d88-41f5-bc41-842be0d8bc83","Type":"ContainerDied","Data":"548eb2dcc0de2c8fa2e49759c4fa1f66441a62adc85c32b94883eadb1cbc1aa4"}
Mar 13 13:03:23.542242 master-0 kubenswrapper[19715]: I0313 13:03:23.542183 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k"
Mar 13 13:03:23.675448 master-0 kubenswrapper[19715]: I0313 13:03:23.675356 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dx88n\" (UniqueName: \"kubernetes.io/projected/c0c7af55-7d88-41f5-bc41-842be0d8bc83-kube-api-access-dx88n\") pod \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") "
Mar 13 13:03:23.675768 master-0 kubenswrapper[19715]: I0313 13:03:23.675486 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-util\") pod \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") "
Mar 13 13:03:23.675768 master-0 kubenswrapper[19715]: I0313 13:03:23.675592 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-bundle\") pod \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\" (UID: \"c0c7af55-7d88-41f5-bc41-842be0d8bc83\") "
Mar 13 13:03:23.679688 master-0 kubenswrapper[19715]: I0313 13:03:23.678428 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-bundle" (OuterVolumeSpecName: "bundle") pod "c0c7af55-7d88-41f5-bc41-842be0d8bc83" (UID: "c0c7af55-7d88-41f5-bc41-842be0d8bc83"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:03:23.682988 master-0 kubenswrapper[19715]: I0313 13:03:23.682899 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c0c7af55-7d88-41f5-bc41-842be0d8bc83-kube-api-access-dx88n" (OuterVolumeSpecName: "kube-api-access-dx88n") pod "c0c7af55-7d88-41f5-bc41-842be0d8bc83" (UID: "c0c7af55-7d88-41f5-bc41-842be0d8bc83"). InnerVolumeSpecName "kube-api-access-dx88n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 13:03:23.685391 master-0 kubenswrapper[19715]: I0313 13:03:23.685323 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-util" (OuterVolumeSpecName: "util") pod "c0c7af55-7d88-41f5-bc41-842be0d8bc83" (UID: "c0c7af55-7d88-41f5-bc41-842be0d8bc83"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 13:03:23.718681 master-0 kubenswrapper[19715]: I0313 13:03:23.718560 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k" event={"ID":"c0c7af55-7d88-41f5-bc41-842be0d8bc83","Type":"ContainerDied","Data":"b1bc1db65a3a08aca78ef678ee32eb7397d6bcdc102a9d8e2a99631e8d51e7b1"}
Mar 13 13:03:23.718681 master-0 kubenswrapper[19715]: I0313 13:03:23.718658 19715 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1bc1db65a3a08aca78ef678ee32eb7397d6bcdc102a9d8e2a99631e8d51e7b1"
Mar 13 13:03:23.719183 master-0 kubenswrapper[19715]: I0313 13:03:23.718790 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08nt42k"
Mar 13 13:03:23.778609 master-0 kubenswrapper[19715]: I0313 13:03:23.778169 19715 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-util\") on node \"master-0\" DevicePath \"\""
Mar 13 13:03:23.778781 master-0 kubenswrapper[19715]: I0313 13:03:23.778760 19715 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/c0c7af55-7d88-41f5-bc41-842be0d8bc83-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 13:03:23.778905 master-0 kubenswrapper[19715]: I0313 13:03:23.778892 19715 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dx88n\" (UniqueName: \"kubernetes.io/projected/c0c7af55-7d88-41f5-bc41-842be0d8bc83-kube-api-access-dx88n\") on node \"master-0\" DevicePath \"\""
Mar 13 13:03:24.749020 master-0 kubenswrapper[19715]: I0313 13:03:24.748931 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" event={"ID":"f7fee441-ffbd-4fb9-b856-cf6a093d2b14","Type":"ContainerStarted","Data":"05c9c54a5f280702ed13bd534171519dc88b74ab4196ff7769a57bf73ebbb20a"}
Mar 13 13:03:24.787980 master-0 kubenswrapper[19715]: I0313 13:03:24.787838 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-ph2d4" podStartSLOduration=2.48531822 podStartE2EDuration="6.787799799s" podCreationTimestamp="2026-03-13 13:03:18 +0000 UTC" firstStartedPulling="2026-03-13 13:03:19.277349437 +0000 UTC m=+825.844022204" lastFinishedPulling="2026-03-13 13:03:23.579831026 +0000 UTC m=+830.146503783" observedRunningTime="2026-03-13 13:03:24.774898775 +0000 UTC m=+831.341571562" watchObservedRunningTime="2026-03-13 13:03:24.787799799 +0000 UTC m=+831.354472556"
Mar 13 13:03:27.000186 master-0 kubenswrapper[19715]: I0313 13:03:27.000117 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-vgc8q"]
Mar 13 13:03:27.001203 master-0 kubenswrapper[19715]: E0313 13:03:27.001182 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c7af55-7d88-41f5-bc41-842be0d8bc83" containerName="extract"
Mar 13 13:03:27.001290 master-0 kubenswrapper[19715]: I0313 13:03:27.001278 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c7af55-7d88-41f5-bc41-842be0d8bc83" containerName="extract"
Mar 13 13:03:27.001365 master-0 kubenswrapper[19715]: E0313 13:03:27.001354 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c7af55-7d88-41f5-bc41-842be0d8bc83" containerName="util"
Mar 13 13:03:27.001434 master-0 kubenswrapper[19715]: I0313 13:03:27.001424 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c7af55-7d88-41f5-bc41-842be0d8bc83" containerName="util"
Mar 13 13:03:27.001506 master-0 kubenswrapper[19715]: E0313 13:03:27.001496 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c0c7af55-7d88-41f5-bc41-842be0d8bc83" containerName="pull"
Mar 13 13:03:27.001569 master-0 kubenswrapper[19715]: I0313 13:03:27.001559 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="c0c7af55-7d88-41f5-bc41-842be0d8bc83" containerName="pull"
Mar 13 13:03:27.001960 master-0 kubenswrapper[19715]: I0313 13:03:27.001935 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="c0c7af55-7d88-41f5-bc41-842be0d8bc83" containerName="extract"
Mar 13 13:03:27.002608 master-0 kubenswrapper[19715]: I0313 13:03:27.002563 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q"
Mar 13 13:03:27.004734 master-0 kubenswrapper[19715]: I0313 13:03:27.004692 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Mar 13 13:03:27.005001 master-0 kubenswrapper[19715]: I0313 13:03:27.004968 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Mar 13 13:03:27.019859 master-0 kubenswrapper[19715]: I0313 13:03:27.019094 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-vgc8q"]
Mar 13 13:03:27.041520 master-0 kubenswrapper[19715]: I0313 13:03:27.041438 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp7fg\" (UniqueName: \"kubernetes.io/projected/443af3f8-080e-4540-8496-ef84da64a98e-kube-api-access-pp7fg\") pod \"cert-manager-webhook-6888856db4-vgc8q\" (UID: \"443af3f8-080e-4540-8496-ef84da64a98e\") " pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q"
Mar 13 13:03:27.041936 master-0 kubenswrapper[19715]: I0313 13:03:27.041741 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/443af3f8-080e-4540-8496-ef84da64a98e-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-vgc8q\" (UID: \"443af3f8-080e-4540-8496-ef84da64a98e\") " pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q"
Mar 13 13:03:27.143990 master-0 kubenswrapper[19715]: I0313 13:03:27.143871 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pp7fg\" (UniqueName: \"kubernetes.io/projected/443af3f8-080e-4540-8496-ef84da64a98e-kube-api-access-pp7fg\") pod \"cert-manager-webhook-6888856db4-vgc8q\" (UID: \"443af3f8-080e-4540-8496-ef84da64a98e\") " pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q"
Mar 13 13:03:27.143990 master-0 kubenswrapper[19715]: I0313 13:03:27.143974 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/443af3f8-080e-4540-8496-ef84da64a98e-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-vgc8q\" (UID: \"443af3f8-080e-4540-8496-ef84da64a98e\") " pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q"
Mar 13 13:03:27.186607 master-0 kubenswrapper[19715]: I0313 13:03:27.184420 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pp7fg\" (UniqueName: \"kubernetes.io/projected/443af3f8-080e-4540-8496-ef84da64a98e-kube-api-access-pp7fg\") pod \"cert-manager-webhook-6888856db4-vgc8q\" (UID: \"443af3f8-080e-4540-8496-ef84da64a98e\") " pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q"
Mar 13 13:03:27.186607 master-0 kubenswrapper[19715]: I0313 13:03:27.186309 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/443af3f8-080e-4540-8496-ef84da64a98e-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-vgc8q\" (UID: \"443af3f8-080e-4540-8496-ef84da64a98e\") " pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q"
Mar 13 13:03:27.337332 master-0 kubenswrapper[19715]: I0313 13:03:27.337178 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q"
Mar 13 13:03:27.997190 master-0 kubenswrapper[19715]: I0313 13:03:27.997077 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-vgc8q"]
Mar 13 13:03:28.007760 master-0 kubenswrapper[19715]: W0313 13:03:28.007678 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod443af3f8_080e_4540_8496_ef84da64a98e.slice/crio-5ba9bff195cd48b62f3a72eedf4e10dfec6f7a8264d931f6a82ebecf0f49b091 WatchSource:0}: Error finding container 5ba9bff195cd48b62f3a72eedf4e10dfec6f7a8264d931f6a82ebecf0f49b091: Status 404 returned error can't find the container with id 5ba9bff195cd48b62f3a72eedf4e10dfec6f7a8264d931f6a82ebecf0f49b091
Mar 13 13:03:28.813725 master-0 kubenswrapper[19715]: I0313 13:03:28.813635 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q" event={"ID":"443af3f8-080e-4540-8496-ef84da64a98e","Type":"ContainerStarted","Data":"5ba9bff195cd48b62f3a72eedf4e10dfec6f7a8264d931f6a82ebecf0f49b091"}
Mar 13 13:03:30.376488 master-0 kubenswrapper[19715]: I0313 13:03:30.376406 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-kwnsl"]
Mar 13 13:03:30.384128 master-0 kubenswrapper[19715]: I0313 13:03:30.379115 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl"
Mar 13 13:03:30.391549 master-0 kubenswrapper[19715]: I0313 13:03:30.391466 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-kwnsl"]
Mar 13 13:03:30.411674 master-0 kubenswrapper[19715]: I0313 13:03:30.410921 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4124d51-e35d-4e96-ab7c-ea9f9f031826-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-kwnsl\" (UID: \"e4124d51-e35d-4e96-ab7c-ea9f9f031826\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl"
Mar 13 13:03:30.411674 master-0 kubenswrapper[19715]: I0313 13:03:30.411033 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j962t\" (UniqueName: \"kubernetes.io/projected/e4124d51-e35d-4e96-ab7c-ea9f9f031826-kube-api-access-j962t\") pod \"cert-manager-cainjector-5545bd876-kwnsl\" (UID: \"e4124d51-e35d-4e96-ab7c-ea9f9f031826\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl"
Mar 13 13:03:30.515355 master-0 kubenswrapper[19715]: I0313 13:03:30.515260 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4124d51-e35d-4e96-ab7c-ea9f9f031826-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-kwnsl\" (UID: \"e4124d51-e35d-4e96-ab7c-ea9f9f031826\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl"
Mar 13 13:03:30.515723 master-0 kubenswrapper[19715]: I0313 13:03:30.515401 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j962t\" (UniqueName: \"kubernetes.io/projected/e4124d51-e35d-4e96-ab7c-ea9f9f031826-kube-api-access-j962t\") pod \"cert-manager-cainjector-5545bd876-kwnsl\" (UID: \"e4124d51-e35d-4e96-ab7c-ea9f9f031826\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl"
Mar 13 13:03:30.534394 master-0 kubenswrapper[19715]: I0313 13:03:30.533060 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e4124d51-e35d-4e96-ab7c-ea9f9f031826-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-kwnsl\" (UID: \"e4124d51-e35d-4e96-ab7c-ea9f9f031826\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl"
Mar 13 13:03:30.537938 master-0 kubenswrapper[19715]: I0313 13:03:30.536415 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j962t\" (UniqueName: \"kubernetes.io/projected/e4124d51-e35d-4e96-ab7c-ea9f9f031826-kube-api-access-j962t\") pod \"cert-manager-cainjector-5545bd876-kwnsl\" (UID: \"e4124d51-e35d-4e96-ab7c-ea9f9f031826\") " pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl"
Mar 13 13:03:30.714606 master-0 kubenswrapper[19715]: I0313 13:03:30.711996 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl"
Mar 13 13:03:31.124834 master-0 kubenswrapper[19715]: I0313 13:03:31.121950 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-bsppq"]
Mar 13 13:03:31.124834 master-0 kubenswrapper[19715]: I0313 13:03:31.124168 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bsppq"
Mar 13 13:03:31.145630 master-0 kubenswrapper[19715]: I0313 13:03:31.135211 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt"
Mar 13 13:03:31.145630 master-0 kubenswrapper[19715]: I0313 13:03:31.135504 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt"
Mar 13 13:03:31.159621 master-0 kubenswrapper[19715]: I0313 13:03:31.148358 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-877wd\" (UniqueName: \"kubernetes.io/projected/42ae4c26-cb33-47a7-b53b-b88f395f06e0-kube-api-access-877wd\") pod \"nmstate-operator-796d4cfff4-bsppq\" (UID: \"42ae4c26-cb33-47a7-b53b-b88f395f06e0\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-bsppq"
Mar 13 13:03:31.159621 master-0 kubenswrapper[19715]: I0313 13:03:31.148830 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-bsppq"]
Mar 13 13:03:31.253472 master-0 kubenswrapper[19715]: I0313 13:03:31.252635 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-877wd\" (UniqueName: \"kubernetes.io/projected/42ae4c26-cb33-47a7-b53b-b88f395f06e0-kube-api-access-877wd\") pod \"nmstate-operator-796d4cfff4-bsppq\" (UID: \"42ae4c26-cb33-47a7-b53b-b88f395f06e0\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-bsppq"
Mar 13 13:03:31.262208 master-0 kubenswrapper[19715]: I0313 13:03:31.258952 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-kwnsl"]
Mar 13 13:03:31.281295 master-0 kubenswrapper[19715]: I0313 13:03:31.278730 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-877wd\" (UniqueName: \"kubernetes.io/projected/42ae4c26-cb33-47a7-b53b-b88f395f06e0-kube-api-access-877wd\") pod \"nmstate-operator-796d4cfff4-bsppq\" (UID: \"42ae4c26-cb33-47a7-b53b-b88f395f06e0\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-bsppq"
Mar 13 13:03:31.544318 master-0 kubenswrapper[19715]: I0313 13:03:31.544231 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bsppq"
Mar 13 13:03:31.864343 master-0 kubenswrapper[19715]: I0313 13:03:31.863873 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl" event={"ID":"e4124d51-e35d-4e96-ab7c-ea9f9f031826","Type":"ContainerStarted","Data":"603f005625010d5cc080a849e25781c5b09c9407b67c8f99bd6e664195218163"}
Mar 13 13:03:32.070467 master-0 kubenswrapper[19715]: I0313 13:03:32.064583 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-bsppq"]
Mar 13 13:03:34.529374 master-0 kubenswrapper[19715]: W0313 13:03:34.529253 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod42ae4c26_cb33_47a7_b53b_b88f395f06e0.slice/crio-ece9b7ff595403b0ff9e394fb4ef37f2c73f1e903e45bd23f328ea984f441b79 WatchSource:0}: Error finding container ece9b7ff595403b0ff9e394fb4ef37f2c73f1e903e45bd23f328ea984f441b79: Status 404 returned error can't find the container with id ece9b7ff595403b0ff9e394fb4ef37f2c73f1e903e45bd23f328ea984f441b79
Mar 13 13:03:34.910206 master-0 kubenswrapper[19715]: I0313 13:03:34.910125 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bsppq" event={"ID":"42ae4c26-cb33-47a7-b53b-b88f395f06e0","Type":"ContainerStarted","Data":"ece9b7ff595403b0ff9e394fb4ef37f2c73f1e903e45bd23f328ea984f441b79"}
Mar 13 13:03:34.912203 master-0 kubenswrapper[19715]: I0313 13:03:34.912140 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q" event={"ID":"443af3f8-080e-4540-8496-ef84da64a98e","Type":"ContainerStarted","Data":"461ed69f47b3cf66f8ad27ed12a1ddea905c2e062fa1374df38fb011a240706d"}
Mar 13 13:03:34.913237 master-0 kubenswrapper[19715]: I0313 13:03:34.913209 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q"
Mar 13 13:03:34.915930 master-0 kubenswrapper[19715]: I0313 13:03:34.915862 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl" event={"ID":"e4124d51-e35d-4e96-ab7c-ea9f9f031826","Type":"ContainerStarted","Data":"3a4cf5d22ea5071da86b93a0c03d61d1db2fc9571e690264be1c83ec940be833"}
Mar 13 13:03:34.942013 master-0 kubenswrapper[19715]: I0313 13:03:34.941917 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q" podStartSLOduration=2.289065607 podStartE2EDuration="8.941890927s" podCreationTimestamp="2026-03-13 13:03:26 +0000 UTC" firstStartedPulling="2026-03-13 13:03:28.012159295 +0000 UTC m=+834.578832052" lastFinishedPulling="2026-03-13 13:03:34.664984615 +0000 UTC m=+841.231657372" observedRunningTime="2026-03-13 13:03:34.9394189 +0000 UTC m=+841.506091657" watchObservedRunningTime="2026-03-13 13:03:34.941890927 +0000 UTC m=+841.508563684"
Mar 13 13:03:34.971229 master-0 kubenswrapper[19715]: I0313 13:03:34.970782 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-kwnsl" podStartSLOduration=1.6314406030000002 podStartE2EDuration="4.970756552s" podCreationTimestamp="2026-03-13 13:03:30 +0000 UTC" firstStartedPulling="2026-03-13 13:03:31.272342394 +0000 UTC m=+837.839015151" lastFinishedPulling="2026-03-13 13:03:34.611658333 +0000 UTC m=+841.178331100" observedRunningTime="2026-03-13 13:03:34.964375482 +0000 UTC m=+841.531048249" watchObservedRunningTime="2026-03-13 13:03:34.970756552 +0000 UTC m=+841.537429309"
Mar 13 13:03:37.079743 master-0 kubenswrapper[19715]: I0313 13:03:37.077026 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"]
Mar 13 13:03:37.084630 master-0 kubenswrapper[19715]: I0313 13:03:37.084062 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.093265 master-0 kubenswrapper[19715]: I0313 13:03:37.092765 19715 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert"
Mar 13 13:03:37.093265 master-0 kubenswrapper[19715]: I0313 13:03:37.093023 19715 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert"
Mar 13 13:03:37.093265 master-0 kubenswrapper[19715]: I0313 13:03:37.093115 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt"
Mar 13 13:03:37.093265 master-0 kubenswrapper[19715]: I0313 13:03:37.093233 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt"
Mar 13 13:03:37.100616 master-0 kubenswrapper[19715]: I0313 13:03:37.098242 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"]
Mar 13 13:03:37.136664 master-0 kubenswrapper[19715]: I0313 13:03:37.136546 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ce08f6e-9720-4e70-bba0-f8a56161dc15-apiservice-cert\") pod \"metallb-operator-controller-manager-6c7688d46-wm7m9\" (UID: \"7ce08f6e-9720-4e70-bba0-f8a56161dc15\") " pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.137031 master-0 kubenswrapper[19715]: I0313 13:03:37.136744 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkx5c\" (UniqueName: \"kubernetes.io/projected/7ce08f6e-9720-4e70-bba0-f8a56161dc15-kube-api-access-kkx5c\") pod \"metallb-operator-controller-manager-6c7688d46-wm7m9\" (UID: \"7ce08f6e-9720-4e70-bba0-f8a56161dc15\") " pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.137031 master-0 kubenswrapper[19715]: I0313 13:03:37.136775 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ce08f6e-9720-4e70-bba0-f8a56161dc15-webhook-cert\") pod \"metallb-operator-controller-manager-6c7688d46-wm7m9\" (UID: \"7ce08f6e-9720-4e70-bba0-f8a56161dc15\") " pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.241688 master-0 kubenswrapper[19715]: I0313 13:03:37.241604 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ce08f6e-9720-4e70-bba0-f8a56161dc15-apiservice-cert\") pod \"metallb-operator-controller-manager-6c7688d46-wm7m9\" (UID: \"7ce08f6e-9720-4e70-bba0-f8a56161dc15\") " pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.242119 master-0 kubenswrapper[19715]: I0313 13:03:37.241752 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kkx5c\" (UniqueName: \"kubernetes.io/projected/7ce08f6e-9720-4e70-bba0-f8a56161dc15-kube-api-access-kkx5c\") pod \"metallb-operator-controller-manager-6c7688d46-wm7m9\" (UID: \"7ce08f6e-9720-4e70-bba0-f8a56161dc15\") " pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.242119 master-0 kubenswrapper[19715]: I0313 13:03:37.241776 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ce08f6e-9720-4e70-bba0-f8a56161dc15-webhook-cert\") pod \"metallb-operator-controller-manager-6c7688d46-wm7m9\" (UID: \"7ce08f6e-9720-4e70-bba0-f8a56161dc15\") " pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.260687 master-0 kubenswrapper[19715]: I0313 13:03:37.260240 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/7ce08f6e-9720-4e70-bba0-f8a56161dc15-apiservice-cert\") pod \"metallb-operator-controller-manager-6c7688d46-wm7m9\" (UID: \"7ce08f6e-9720-4e70-bba0-f8a56161dc15\") " pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.276850 master-0 kubenswrapper[19715]: I0313 13:03:37.273714 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/7ce08f6e-9720-4e70-bba0-f8a56161dc15-webhook-cert\") pod \"metallb-operator-controller-manager-6c7688d46-wm7m9\" (UID: \"7ce08f6e-9720-4e70-bba0-f8a56161dc15\") " pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.303663 master-0 kubenswrapper[19715]: I0313 13:03:37.301874 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kkx5c\" (UniqueName: \"kubernetes.io/projected/7ce08f6e-9720-4e70-bba0-f8a56161dc15-kube-api-access-kkx5c\") pod \"metallb-operator-controller-manager-6c7688d46-wm7m9\" (UID: \"7ce08f6e-9720-4e70-bba0-f8a56161dc15\") " pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.492148 master-0 kubenswrapper[19715]: I0313 13:03:37.489555 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"
Mar 13 13:03:37.855766 master-0 kubenswrapper[19715]: I0313 13:03:37.853104 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"]
Mar 13 13:03:37.859859 master-0 kubenswrapper[19715]: I0313 13:03:37.859742 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:37.867325 master-0 kubenswrapper[19715]: I0313 13:03:37.867271 19715 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Mar 13 13:03:37.873469 master-0 kubenswrapper[19715]: I0313 13:03:37.873428 19715 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert"
Mar 13 13:03:37.916133 master-0 kubenswrapper[19715]: I0313 13:03:37.903410 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"]
Mar 13 13:03:37.972481 master-0 kubenswrapper[19715]: I0313 13:03:37.970611 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f6e1dd7-43c5-4906-b16f-627418cfe501-apiservice-cert\") pod \"metallb-operator-webhook-server-7568db4689-9tdfv\" (UID: \"4f6e1dd7-43c5-4906-b16f-627418cfe501\") " pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:37.972481 master-0 kubenswrapper[19715]: I0313 13:03:37.970812 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f6e1dd7-43c5-4906-b16f-627418cfe501-webhook-cert\") pod \"metallb-operator-webhook-server-7568db4689-9tdfv\" (UID: \"4f6e1dd7-43c5-4906-b16f-627418cfe501\") " pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:37.972481 master-0 kubenswrapper[19715]: I0313 13:03:37.970910 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdg8k\" (UniqueName: \"kubernetes.io/projected/4f6e1dd7-43c5-4906-b16f-627418cfe501-kube-api-access-gdg8k\") pod \"metallb-operator-webhook-server-7568db4689-9tdfv\" (UID: \"4f6e1dd7-43c5-4906-b16f-627418cfe501\") " pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:38.078034 master-0 kubenswrapper[19715]: I0313 13:03:38.077964 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f6e1dd7-43c5-4906-b16f-627418cfe501-webhook-cert\") pod \"metallb-operator-webhook-server-7568db4689-9tdfv\" (UID: \"4f6e1dd7-43c5-4906-b16f-627418cfe501\") " pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:38.078034 master-0 kubenswrapper[19715]: I0313 13:03:38.078026 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gdg8k\" (UniqueName: \"kubernetes.io/projected/4f6e1dd7-43c5-4906-b16f-627418cfe501-kube-api-access-gdg8k\") pod \"metallb-operator-webhook-server-7568db4689-9tdfv\" (UID: \"4f6e1dd7-43c5-4906-b16f-627418cfe501\") " pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:38.078485 master-0 kubenswrapper[19715]: I0313 13:03:38.078154 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f6e1dd7-43c5-4906-b16f-627418cfe501-apiservice-cert\") pod \"metallb-operator-webhook-server-7568db4689-9tdfv\" (UID: \"4f6e1dd7-43c5-4906-b16f-627418cfe501\") " pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:38.083788 master-0 kubenswrapper[19715]: I0313 13:03:38.083736 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/4f6e1dd7-43c5-4906-b16f-627418cfe501-webhook-cert\") pod \"metallb-operator-webhook-server-7568db4689-9tdfv\" (UID: \"4f6e1dd7-43c5-4906-b16f-627418cfe501\") " pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:38.094685 master-0 kubenswrapper[19715]: I0313 13:03:38.094582 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/4f6e1dd7-43c5-4906-b16f-627418cfe501-apiservice-cert\") pod \"metallb-operator-webhook-server-7568db4689-9tdfv\" (UID: \"4f6e1dd7-43c5-4906-b16f-627418cfe501\") " pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:38.100575 master-0 kubenswrapper[19715]: I0313 13:03:38.100502 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdg8k\" (UniqueName: \"kubernetes.io/projected/4f6e1dd7-43c5-4906-b16f-627418cfe501-kube-api-access-gdg8k\") pod \"metallb-operator-webhook-server-7568db4689-9tdfv\" (UID: \"4f6e1dd7-43c5-4906-b16f-627418cfe501\") " pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:38.209613 master-0 kubenswrapper[19715]: I0313 13:03:38.209477 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"
Mar 13 13:03:40.138993 master-0 kubenswrapper[19715]: I0313 13:03:40.136208 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bsppq" event={"ID":"42ae4c26-cb33-47a7-b53b-b88f395f06e0","Type":"ContainerStarted","Data":"91ed98a19ae30bbe9def31650ca2dcd660b739b05eb585b0c8f1ba17ee4e569c"}
Mar 13 13:03:40.308987 master-0 kubenswrapper[19715]: I0313 13:03:40.308566 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv"]
Mar 13 13:03:40.347677 master-0 kubenswrapper[19715]: W0313 13:03:40.347560 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f6e1dd7_43c5_4906_b16f_627418cfe501.slice/crio-5614a399522c59cce50a69a2034adc3de41a2051b08e7f1bd9b4401031b7b653 WatchSource:0}: Error finding container 5614a399522c59cce50a69a2034adc3de41a2051b08e7f1bd9b4401031b7b653: Status 404 returned error can't find the container with id 5614a399522c59cce50a69a2034adc3de41a2051b08e7f1bd9b4401031b7b653
Mar 13 13:03:40.353619 master-0 kubenswrapper[19715]: I0313 13:03:40.353435 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-bsppq" podStartSLOduration=4.254730805 podStartE2EDuration="9.353394977s" podCreationTimestamp="2026-03-13 13:03:31 +0000 UTC" firstStartedPulling="2026-03-13 13:03:34.531605953 +0000 UTC m=+841.098278710" lastFinishedPulling="2026-03-13 13:03:39.630270125 +0000 UTC m=+846.196942882" observedRunningTime="2026-03-13 13:03:40.332716719 +0000 UTC m=+846.899389496" watchObservedRunningTime="2026-03-13 13:03:40.353394977 +0000 UTC m=+846.920067734"
Mar 13 13:03:40.559611 master-0 kubenswrapper[19715]: W0313 13:03:40.556922 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ce08f6e_9720_4e70_bba0_f8a56161dc15.slice/crio-5786dd55717b2bdf4678347791d3433e91bda3d92db42664abb6ab310c768bc4 WatchSource:0}: Error finding container 5786dd55717b2bdf4678347791d3433e91bda3d92db42664abb6ab310c768bc4: Status 404 returned error can't find the container with id 5786dd55717b2bdf4678347791d3433e91bda3d92db42664abb6ab310c768bc4
Mar 13 13:03:40.573617 master-0 kubenswrapper[19715]: I0313 13:03:40.572604 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9"]
Mar 13 13:03:41.172177 master-0 kubenswrapper[19715]: I0313 13:03:41.169379 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv" event={"ID":"4f6e1dd7-43c5-4906-b16f-627418cfe501","Type":"ContainerStarted","Data":"5614a399522c59cce50a69a2034adc3de41a2051b08e7f1bd9b4401031b7b653"}
Mar 13 13:03:41.184902 master-0 kubenswrapper[19715]: I0313 13:03:41.184738 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9" event={"ID":"7ce08f6e-9720-4e70-bba0-f8a56161dc15","Type":"ContainerStarted","Data":"5786dd55717b2bdf4678347791d3433e91bda3d92db42664abb6ab310c768bc4"}
Mar 13 13:03:42.351857 master-0 kubenswrapper[19715]: I0313 13:03:42.351725 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-vgc8q"
Mar 13 13:03:45.435623 master-0 kubenswrapper[19715]: I0313 13:03:45.434036 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc"]
Mar 13 13:03:45.436603 master-0 kubenswrapper[19715]: I0313 13:03:45.435921 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc"
Mar 13 13:03:45.442189 master-0 kubenswrapper[19715]: I0313 13:03:45.442115 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt"
Mar 13 13:03:45.442439 master-0 kubenswrapper[19715]: I0313 13:03:45.442417 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt"
Mar 13 13:03:45.467861 master-0 kubenswrapper[19715]: I0313 13:03:45.467782 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc"]
Mar 13 13:03:45.604619 master-0 kubenswrapper[19715]: I0313 13:03:45.600424 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jq4d4\" (UniqueName: \"kubernetes.io/projected/b98acd6f-01ee-4862-ba43-72fa7b00c7da-kube-api-access-jq4d4\") pod \"obo-prometheus-operator-68bc856cb9-dvqhc\" (UID: \"b98acd6f-01ee-4862-ba43-72fa7b00c7da\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc"
Mar 13 13:03:45.707164 master-0 kubenswrapper[19715]: I0313 13:03:45.706921 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jq4d4\" (UniqueName: \"kubernetes.io/projected/b98acd6f-01ee-4862-ba43-72fa7b00c7da-kube-api-access-jq4d4\") pod \"obo-prometheus-operator-68bc856cb9-dvqhc\" (UID: \"b98acd6f-01ee-4862-ba43-72fa7b00c7da\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc"
Mar 13 13:03:45.732403 master-0 kubenswrapper[19715]: I0313 13:03:45.732322 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5"]
Mar 13 13:03:45.734939 master-0 kubenswrapper[19715]: I0313 13:03:45.734905 19715 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" Mar 13 13:03:45.739324 master-0 kubenswrapper[19715]: I0313 13:03:45.739274 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 13 13:03:45.755628 master-0 kubenswrapper[19715]: I0313 13:03:45.750734 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s"] Mar 13 13:03:45.755628 master-0 kubenswrapper[19715]: I0313 13:03:45.752321 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" Mar 13 13:03:45.786405 master-0 kubenswrapper[19715]: I0313 13:03:45.782487 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5"] Mar 13 13:03:45.799100 master-0 kubenswrapper[19715]: I0313 13:03:45.792712 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jq4d4\" (UniqueName: \"kubernetes.io/projected/b98acd6f-01ee-4862-ba43-72fa7b00c7da-kube-api-access-jq4d4\") pod \"obo-prometheus-operator-68bc856cb9-dvqhc\" (UID: \"b98acd6f-01ee-4862-ba43-72fa7b00c7da\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc" Mar 13 13:03:45.813956 master-0 kubenswrapper[19715]: I0313 13:03:45.809942 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68d08c55-918b-436a-9da6-7e1998d0c415-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-c25v5\" (UID: \"68d08c55-918b-436a-9da6-7e1998d0c415\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" Mar 13 13:03:45.813956 master-0 kubenswrapper[19715]: I0313 
13:03:45.810029 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68d08c55-918b-436a-9da6-7e1998d0c415-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-c25v5\" (UID: \"68d08c55-918b-436a-9da6-7e1998d0c415\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" Mar 13 13:03:45.813956 master-0 kubenswrapper[19715]: I0313 13:03:45.812830 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc" Mar 13 13:03:45.836568 master-0 kubenswrapper[19715]: I0313 13:03:45.833637 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s"] Mar 13 13:03:45.914025 master-0 kubenswrapper[19715]: I0313 13:03:45.913277 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d7479b8-c5f1-4cd7-8bab-80addabf411a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-6lc6s\" (UID: \"3d7479b8-c5f1-4cd7-8bab-80addabf411a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" Mar 13 13:03:45.914025 master-0 kubenswrapper[19715]: I0313 13:03:45.913362 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d7479b8-c5f1-4cd7-8bab-80addabf411a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-6lc6s\" (UID: \"3d7479b8-c5f1-4cd7-8bab-80addabf411a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" Mar 13 13:03:45.914025 master-0 kubenswrapper[19715]: I0313 13:03:45.913422 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68d08c55-918b-436a-9da6-7e1998d0c415-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-c25v5\" (UID: \"68d08c55-918b-436a-9da6-7e1998d0c415\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" Mar 13 13:03:45.914025 master-0 kubenswrapper[19715]: I0313 13:03:45.913451 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68d08c55-918b-436a-9da6-7e1998d0c415-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-c25v5\" (UID: \"68d08c55-918b-436a-9da6-7e1998d0c415\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" Mar 13 13:03:45.919029 master-0 kubenswrapper[19715]: I0313 13:03:45.918974 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/68d08c55-918b-436a-9da6-7e1998d0c415-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-c25v5\" (UID: \"68d08c55-918b-436a-9da6-7e1998d0c415\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" Mar 13 13:03:45.924766 master-0 kubenswrapper[19715]: I0313 13:03:45.924711 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/68d08c55-918b-436a-9da6-7e1998d0c415-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-c25v5\" (UID: \"68d08c55-918b-436a-9da6-7e1998d0c415\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" Mar 13 13:03:45.963484 master-0 kubenswrapper[19715]: I0313 13:03:45.963288 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-2dmz5"] Mar 13 13:03:45.965642 master-0 kubenswrapper[19715]: I0313 13:03:45.965619 19715 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" Mar 13 13:03:45.973191 master-0 kubenswrapper[19715]: I0313 13:03:45.971276 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Mar 13 13:03:45.982718 master-0 kubenswrapper[19715]: I0313 13:03:45.981414 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-2dmz5"] Mar 13 13:03:46.017625 master-0 kubenswrapper[19715]: I0313 13:03:46.017507 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d7479b8-c5f1-4cd7-8bab-80addabf411a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-6lc6s\" (UID: \"3d7479b8-c5f1-4cd7-8bab-80addabf411a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" Mar 13 13:03:46.018059 master-0 kubenswrapper[19715]: I0313 13:03:46.017713 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d7479b8-c5f1-4cd7-8bab-80addabf411a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-6lc6s\" (UID: \"3d7479b8-c5f1-4cd7-8bab-80addabf411a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" Mar 13 13:03:46.029923 master-0 kubenswrapper[19715]: I0313 13:03:46.029842 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3d7479b8-c5f1-4cd7-8bab-80addabf411a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-6lc6s\" (UID: \"3d7479b8-c5f1-4cd7-8bab-80addabf411a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" Mar 13 13:03:46.040294 master-0 kubenswrapper[19715]: I0313 13:03:46.039366 19715 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3d7479b8-c5f1-4cd7-8bab-80addabf411a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-7d64756467-6lc6s\" (UID: \"3d7479b8-c5f1-4cd7-8bab-80addabf411a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" Mar 13 13:03:46.111040 master-0 kubenswrapper[19715]: I0313 13:03:46.110669 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" Mar 13 13:03:46.122391 master-0 kubenswrapper[19715]: I0313 13:03:46.122303 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5zr\" (UniqueName: \"kubernetes.io/projected/1033c510-5024-4164-89af-53acbd4dbe1c-kube-api-access-9d5zr\") pod \"observability-operator-59bdc8b94-2dmz5\" (UID: \"1033c510-5024-4164-89af-53acbd4dbe1c\") " pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" Mar 13 13:03:46.122540 master-0 kubenswrapper[19715]: I0313 13:03:46.122428 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1033c510-5024-4164-89af-53acbd4dbe1c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-2dmz5\" (UID: \"1033c510-5024-4164-89af-53acbd4dbe1c\") " pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" Mar 13 13:03:46.154742 master-0 kubenswrapper[19715]: I0313 13:03:46.154618 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" Mar 13 13:03:46.220850 master-0 kubenswrapper[19715]: I0313 13:03:46.220630 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-s2769"] Mar 13 13:03:46.224619 master-0 kubenswrapper[19715]: I0313 13:03:46.222282 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-s2769" Mar 13 13:03:46.224619 master-0 kubenswrapper[19715]: I0313 13:03:46.223995 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5zr\" (UniqueName: \"kubernetes.io/projected/1033c510-5024-4164-89af-53acbd4dbe1c-kube-api-access-9d5zr\") pod \"observability-operator-59bdc8b94-2dmz5\" (UID: \"1033c510-5024-4164-89af-53acbd4dbe1c\") " pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" Mar 13 13:03:46.224619 master-0 kubenswrapper[19715]: I0313 13:03:46.224328 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1033c510-5024-4164-89af-53acbd4dbe1c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-2dmz5\" (UID: \"1033c510-5024-4164-89af-53acbd4dbe1c\") " pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" Mar 13 13:03:46.241340 master-0 kubenswrapper[19715]: I0313 13:03:46.241204 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/1033c510-5024-4164-89af-53acbd4dbe1c-observability-operator-tls\") pod \"observability-operator-59bdc8b94-2dmz5\" (UID: \"1033c510-5024-4164-89af-53acbd4dbe1c\") " pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" Mar 13 13:03:46.268838 master-0 kubenswrapper[19715]: I0313 13:03:46.267493 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operators/perses-operator-5bf474d74f-s2769"] Mar 13 13:03:46.286709 master-0 kubenswrapper[19715]: I0313 13:03:46.286188 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5zr\" (UniqueName: \"kubernetes.io/projected/1033c510-5024-4164-89af-53acbd4dbe1c-kube-api-access-9d5zr\") pod \"observability-operator-59bdc8b94-2dmz5\" (UID: \"1033c510-5024-4164-89af-53acbd4dbe1c\") " pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" Mar 13 13:03:46.317674 master-0 kubenswrapper[19715]: I0313 13:03:46.315251 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" Mar 13 13:03:46.330016 master-0 kubenswrapper[19715]: I0313 13:03:46.327648 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v87s\" (UniqueName: \"kubernetes.io/projected/68f8da86-43bb-4465-b4af-701321b0d5c6-kube-api-access-2v87s\") pod \"perses-operator-5bf474d74f-s2769\" (UID: \"68f8da86-43bb-4465-b4af-701321b0d5c6\") " pod="openshift-operators/perses-operator-5bf474d74f-s2769" Mar 13 13:03:46.330016 master-0 kubenswrapper[19715]: I0313 13:03:46.327784 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/68f8da86-43bb-4465-b4af-701321b0d5c6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-s2769\" (UID: \"68f8da86-43bb-4465-b4af-701321b0d5c6\") " pod="openshift-operators/perses-operator-5bf474d74f-s2769" Mar 13 13:03:46.450773 master-0 kubenswrapper[19715]: I0313 13:03:46.439401 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2v87s\" (UniqueName: \"kubernetes.io/projected/68f8da86-43bb-4465-b4af-701321b0d5c6-kube-api-access-2v87s\") pod \"perses-operator-5bf474d74f-s2769\" (UID: \"68f8da86-43bb-4465-b4af-701321b0d5c6\") 
" pod="openshift-operators/perses-operator-5bf474d74f-s2769" Mar 13 13:03:46.450773 master-0 kubenswrapper[19715]: I0313 13:03:46.439806 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/68f8da86-43bb-4465-b4af-701321b0d5c6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-s2769\" (UID: \"68f8da86-43bb-4465-b4af-701321b0d5c6\") " pod="openshift-operators/perses-operator-5bf474d74f-s2769" Mar 13 13:03:46.458081 master-0 kubenswrapper[19715]: I0313 13:03:46.454882 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/68f8da86-43bb-4465-b4af-701321b0d5c6-openshift-service-ca\") pod \"perses-operator-5bf474d74f-s2769\" (UID: \"68f8da86-43bb-4465-b4af-701321b0d5c6\") " pod="openshift-operators/perses-operator-5bf474d74f-s2769" Mar 13 13:03:46.495756 master-0 kubenswrapper[19715]: I0313 13:03:46.495426 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-kt5k7"] Mar 13 13:03:46.498487 master-0 kubenswrapper[19715]: I0313 13:03:46.497764 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-kt5k7" Mar 13 13:03:46.526028 master-0 kubenswrapper[19715]: I0313 13:03:46.525921 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v87s\" (UniqueName: \"kubernetes.io/projected/68f8da86-43bb-4465-b4af-701321b0d5c6-kube-api-access-2v87s\") pod \"perses-operator-5bf474d74f-s2769\" (UID: \"68f8da86-43bb-4465-b4af-701321b0d5c6\") " pod="openshift-operators/perses-operator-5bf474d74f-s2769" Mar 13 13:03:46.538395 master-0 kubenswrapper[19715]: I0313 13:03:46.536782 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-kt5k7"] Mar 13 13:03:46.589321 master-0 kubenswrapper[19715]: I0313 13:03:46.589230 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-s2769" Mar 13 13:03:46.646922 master-0 kubenswrapper[19715]: I0313 13:03:46.645894 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dbb64df-70d5-4d39-aefc-3567dc78a35a-bound-sa-token\") pod \"cert-manager-545d4d4674-kt5k7\" (UID: \"3dbb64df-70d5-4d39-aefc-3567dc78a35a\") " pod="cert-manager/cert-manager-545d4d4674-kt5k7" Mar 13 13:03:46.646922 master-0 kubenswrapper[19715]: I0313 13:03:46.646125 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtjlr\" (UniqueName: \"kubernetes.io/projected/3dbb64df-70d5-4d39-aefc-3567dc78a35a-kube-api-access-qtjlr\") pod \"cert-manager-545d4d4674-kt5k7\" (UID: \"3dbb64df-70d5-4d39-aefc-3567dc78a35a\") " pod="cert-manager/cert-manager-545d4d4674-kt5k7" Mar 13 13:03:46.750478 master-0 kubenswrapper[19715]: I0313 13:03:46.750263 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qtjlr\" (UniqueName: 
\"kubernetes.io/projected/3dbb64df-70d5-4d39-aefc-3567dc78a35a-kube-api-access-qtjlr\") pod \"cert-manager-545d4d4674-kt5k7\" (UID: \"3dbb64df-70d5-4d39-aefc-3567dc78a35a\") " pod="cert-manager/cert-manager-545d4d4674-kt5k7" Mar 13 13:03:46.750478 master-0 kubenswrapper[19715]: I0313 13:03:46.750381 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dbb64df-70d5-4d39-aefc-3567dc78a35a-bound-sa-token\") pod \"cert-manager-545d4d4674-kt5k7\" (UID: \"3dbb64df-70d5-4d39-aefc-3567dc78a35a\") " pod="cert-manager/cert-manager-545d4d4674-kt5k7" Mar 13 13:03:46.778994 master-0 kubenswrapper[19715]: I0313 13:03:46.778890 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3dbb64df-70d5-4d39-aefc-3567dc78a35a-bound-sa-token\") pod \"cert-manager-545d4d4674-kt5k7\" (UID: \"3dbb64df-70d5-4d39-aefc-3567dc78a35a\") " pod="cert-manager/cert-manager-545d4d4674-kt5k7" Mar 13 13:03:46.784746 master-0 kubenswrapper[19715]: I0313 13:03:46.780316 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtjlr\" (UniqueName: \"kubernetes.io/projected/3dbb64df-70d5-4d39-aefc-3567dc78a35a-kube-api-access-qtjlr\") pod \"cert-manager-545d4d4674-kt5k7\" (UID: \"3dbb64df-70d5-4d39-aefc-3567dc78a35a\") " pod="cert-manager/cert-manager-545d4d4674-kt5k7" Mar 13 13:03:46.854418 master-0 kubenswrapper[19715]: I0313 13:03:46.854314 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-kt5k7" Mar 13 13:03:51.253421 master-0 kubenswrapper[19715]: I0313 13:03:51.251416 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-2dmz5"] Mar 13 13:03:51.354172 master-0 kubenswrapper[19715]: I0313 13:03:51.354125 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc"] Mar 13 13:03:51.533608 master-0 kubenswrapper[19715]: I0313 13:03:51.532976 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc" event={"ID":"b98acd6f-01ee-4862-ba43-72fa7b00c7da","Type":"ContainerStarted","Data":"ca72d1d579dfe9c13faa94838df6594212b4298e78a52240835b33a870bd33b3"} Mar 13 13:03:51.558630 master-0 kubenswrapper[19715]: I0313 13:03:51.545861 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9" event={"ID":"7ce08f6e-9720-4e70-bba0-f8a56161dc15","Type":"ContainerStarted","Data":"4056650b396ddef4c35089860a57818cd43a550dcf324a3a10c0a8ac21122ad0"} Mar 13 13:03:51.558630 master-0 kubenswrapper[19715]: I0313 13:03:51.546050 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9" Mar 13 13:03:51.580323 master-0 kubenswrapper[19715]: I0313 13:03:51.574271 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" event={"ID":"1033c510-5024-4164-89af-53acbd4dbe1c","Type":"ContainerStarted","Data":"7b8e8315661b2ad8ec1a1e924cad0f53a5d72777f45312465e35911dba422c69"} Mar 13 13:03:51.580323 master-0 kubenswrapper[19715]: I0313 13:03:51.575998 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv" 
event={"ID":"4f6e1dd7-43c5-4906-b16f-627418cfe501","Type":"ContainerStarted","Data":"d975c92cbb380a196a6d837b301b9eb14ab27cef9261c73c790216f19f3ebd83"} Mar 13 13:03:51.601621 master-0 kubenswrapper[19715]: I0313 13:03:51.583981 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv" Mar 13 13:03:51.679957 master-0 kubenswrapper[19715]: I0313 13:03:51.670075 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-s2769"] Mar 13 13:03:51.723179 master-0 kubenswrapper[19715]: I0313 13:03:51.717316 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9" podStartSLOduration=4.501386671 podStartE2EDuration="14.717283056s" podCreationTimestamp="2026-03-13 13:03:37 +0000 UTC" firstStartedPulling="2026-03-13 13:03:40.561991948 +0000 UTC m=+847.128664705" lastFinishedPulling="2026-03-13 13:03:50.777888333 +0000 UTC m=+857.344561090" observedRunningTime="2026-03-13 13:03:51.601507487 +0000 UTC m=+858.168180254" watchObservedRunningTime="2026-03-13 13:03:51.717283056 +0000 UTC m=+858.283955813" Mar 13 13:03:51.736024 master-0 kubenswrapper[19715]: W0313 13:03:51.734152 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68d08c55_918b_436a_9da6_7e1998d0c415.slice/crio-874baeb143a13454e8352e6eaeb106bd752b1510f007d556027a652800c6591b WatchSource:0}: Error finding container 874baeb143a13454e8352e6eaeb106bd752b1510f007d556027a652800c6591b: Status 404 returned error can't find the container with id 874baeb143a13454e8352e6eaeb106bd752b1510f007d556027a652800c6591b Mar 13 13:03:51.787308 master-0 kubenswrapper[19715]: I0313 13:03:51.778123 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-kt5k7"] Mar 13 13:03:51.787308 master-0 
kubenswrapper[19715]: I0313 13:03:51.783622 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv" podStartSLOduration=4.305563021 podStartE2EDuration="14.783597105s" podCreationTimestamp="2026-03-13 13:03:37 +0000 UTC" firstStartedPulling="2026-03-13 13:03:40.37389403 +0000 UTC m=+846.940566787" lastFinishedPulling="2026-03-13 13:03:50.851928114 +0000 UTC m=+857.418600871" observedRunningTime="2026-03-13 13:03:51.691912981 +0000 UTC m=+858.258585758" watchObservedRunningTime="2026-03-13 13:03:51.783597105 +0000 UTC m=+858.350269872" Mar 13 13:03:51.797840 master-0 kubenswrapper[19715]: I0313 13:03:51.797621 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5"] Mar 13 13:03:51.802379 master-0 kubenswrapper[19715]: I0313 13:03:51.802323 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s"] Mar 13 13:03:52.590899 master-0 kubenswrapper[19715]: I0313 13:03:52.590775 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" event={"ID":"3d7479b8-c5f1-4cd7-8bab-80addabf411a","Type":"ContainerStarted","Data":"ab9eb2e8c457e82f9535ebf9594ce5c36621f56e65bb0f586e58c3a7d6c665ff"} Mar 13 13:03:52.599601 master-0 kubenswrapper[19715]: I0313 13:03:52.594308 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-s2769" event={"ID":"68f8da86-43bb-4465-b4af-701321b0d5c6","Type":"ContainerStarted","Data":"94f01c1cca77a3c34eefbf786e8bb64326f9095c0032d65c1f068bc1f2813eb1"} Mar 13 13:03:52.599601 master-0 kubenswrapper[19715]: I0313 13:03:52.596532 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-kt5k7" 
event={"ID":"3dbb64df-70d5-4d39-aefc-3567dc78a35a","Type":"ContainerStarted","Data":"e20643e1e7298412340a50eb39e19698fb392c6ea408c706399783df0b71f416"} Mar 13 13:03:52.599601 master-0 kubenswrapper[19715]: I0313 13:03:52.596559 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-kt5k7" event={"ID":"3dbb64df-70d5-4d39-aefc-3567dc78a35a","Type":"ContainerStarted","Data":"427922c355ea5ccfc0d1b8c75e03e3b88cb0cb9f5fd3094a05644a36c845afe1"} Mar 13 13:03:52.599601 master-0 kubenswrapper[19715]: I0313 13:03:52.598977 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" event={"ID":"68d08c55-918b-436a-9da6-7e1998d0c415","Type":"ContainerStarted","Data":"874baeb143a13454e8352e6eaeb106bd752b1510f007d556027a652800c6591b"} Mar 13 13:03:52.651826 master-0 kubenswrapper[19715]: I0313 13:03:52.650800 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-kt5k7" podStartSLOduration=6.650736264 podStartE2EDuration="6.650736264s" podCreationTimestamp="2026-03-13 13:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:03:52.642193426 +0000 UTC m=+859.208866193" watchObservedRunningTime="2026-03-13 13:03:52.650736264 +0000 UTC m=+859.217409021" Mar 13 13:04:06.892653 master-0 kubenswrapper[19715]: I0313 13:04:06.891935 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" event={"ID":"3d7479b8-c5f1-4cd7-8bab-80addabf411a","Type":"ContainerStarted","Data":"b6e1b7bc1f2cf75be156630b5816114dcf934209676c6e8f853217b1f03dcecd"} Mar 13 13:04:06.896501 master-0 kubenswrapper[19715]: I0313 13:04:06.896442 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-s2769" 
event={"ID":"68f8da86-43bb-4465-b4af-701321b0d5c6","Type":"ContainerStarted","Data":"c554bec006adedb6bf867dfe1e04dab765603fe86760d819dc4c4279b06ac79a"} Mar 13 13:04:06.898275 master-0 kubenswrapper[19715]: I0313 13:04:06.897884 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-s2769" Mar 13 13:04:06.900342 master-0 kubenswrapper[19715]: I0313 13:04:06.900303 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" event={"ID":"1033c510-5024-4164-89af-53acbd4dbe1c","Type":"ContainerStarted","Data":"3387c5f16b7fbbf54ad98f3ebc3a26654c530f73c0b707fbb7289fbcd85e3120"} Mar 13 13:04:06.901783 master-0 kubenswrapper[19715]: I0313 13:04:06.901723 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" Mar 13 13:04:06.902800 master-0 kubenswrapper[19715]: I0313 13:04:06.902773 19715 patch_prober.go:28] interesting pod/observability-operator-59bdc8b94-2dmz5 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.128.0.122:8081/healthz\": dial tcp 10.128.0.122:8081: connect: connection refused" start-of-body= Mar 13 13:04:06.902984 master-0 kubenswrapper[19715]: I0313 13:04:06.902946 19715 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" podUID="1033c510-5024-4164-89af-53acbd4dbe1c" containerName="operator" probeResult="failure" output="Get \"http://10.128.0.122:8081/healthz\": dial tcp 10.128.0.122:8081: connect: connection refused" Mar 13 13:04:06.906133 master-0 kubenswrapper[19715]: I0313 13:04:06.906081 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" 
event={"ID":"68d08c55-918b-436a-9da6-7e1998d0c415","Type":"ContainerStarted","Data":"233218ee7a94719e64fc8fe92208d8c3e192b33b35fe0956305552d061c16d70"} Mar 13 13:04:06.912063 master-0 kubenswrapper[19715]: I0313 13:04:06.911958 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc" event={"ID":"b98acd6f-01ee-4862-ba43-72fa7b00c7da","Type":"ContainerStarted","Data":"566ed9ae08dfb6471ee09fdc363629f68ac3b01f5a47d57b81e8d93fc4fdc51a"} Mar 13 13:04:06.924152 master-0 kubenswrapper[19715]: I0313 13:04:06.924057 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-6lc6s" podStartSLOduration=7.478785542 podStartE2EDuration="21.924040153s" podCreationTimestamp="2026-03-13 13:03:45 +0000 UTC" firstStartedPulling="2026-03-13 13:03:51.750733395 +0000 UTC m=+858.317406152" lastFinishedPulling="2026-03-13 13:04:06.195988016 +0000 UTC m=+872.762660763" observedRunningTime="2026-03-13 13:04:06.923253139 +0000 UTC m=+873.489925896" watchObservedRunningTime="2026-03-13 13:04:06.924040153 +0000 UTC m=+873.490712910" Mar 13 13:04:07.028465 master-0 kubenswrapper[19715]: I0313 13:04:07.025616 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" podStartSLOduration=7.045064244 podStartE2EDuration="22.025575207s" podCreationTimestamp="2026-03-13 13:03:45 +0000 UTC" firstStartedPulling="2026-03-13 13:03:51.277216989 +0000 UTC m=+857.843889746" lastFinishedPulling="2026-03-13 13:04:06.257727942 +0000 UTC m=+872.824400709" observedRunningTime="2026-03-13 13:04:07.022380086 +0000 UTC m=+873.589052843" watchObservedRunningTime="2026-03-13 13:04:07.025575207 +0000 UTC m=+873.592247964" Mar 13 13:04:07.094237 master-0 kubenswrapper[19715]: I0313 13:04:07.094122 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/obo-prometheus-operator-68bc856cb9-dvqhc" podStartSLOduration=7.287269768 podStartE2EDuration="22.094092836s" podCreationTimestamp="2026-03-13 13:03:45 +0000 UTC" firstStartedPulling="2026-03-13 13:03:51.435231463 +0000 UTC m=+858.001904220" lastFinishedPulling="2026-03-13 13:04:06.242054531 +0000 UTC m=+872.808727288" observedRunningTime="2026-03-13 13:04:07.053237214 +0000 UTC m=+873.619909991" watchObservedRunningTime="2026-03-13 13:04:07.094092836 +0000 UTC m=+873.660765593" Mar 13 13:04:07.097800 master-0 kubenswrapper[19715]: I0313 13:04:07.097739 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-s2769" podStartSLOduration=6.526438536 podStartE2EDuration="21.097728409s" podCreationTimestamp="2026-03-13 13:03:46 +0000 UTC" firstStartedPulling="2026-03-13 13:03:51.708931184 +0000 UTC m=+858.275603941" lastFinishedPulling="2026-03-13 13:04:06.280221057 +0000 UTC m=+872.846893814" observedRunningTime="2026-03-13 13:04:07.092549687 +0000 UTC m=+873.659222444" watchObservedRunningTime="2026-03-13 13:04:07.097728409 +0000 UTC m=+873.664401166" Mar 13 13:04:07.138102 master-0 kubenswrapper[19715]: I0313 13:04:07.135629 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-7d64756467-c25v5" podStartSLOduration=7.746451394 podStartE2EDuration="22.135559635s" podCreationTimestamp="2026-03-13 13:03:45 +0000 UTC" firstStartedPulling="2026-03-13 13:03:51.800778974 +0000 UTC m=+858.367451731" lastFinishedPulling="2026-03-13 13:04:06.189887215 +0000 UTC m=+872.756559972" observedRunningTime="2026-03-13 13:04:07.124778738 +0000 UTC m=+873.691451525" watchObservedRunningTime="2026-03-13 13:04:07.135559635 +0000 UTC m=+873.702232402" Mar 13 13:04:07.924440 master-0 kubenswrapper[19715]: I0313 13:04:07.924380 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-operators/observability-operator-59bdc8b94-2dmz5" Mar 13 13:04:08.220101 master-0 kubenswrapper[19715]: I0313 13:04:08.219912 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-7568db4689-9tdfv" Mar 13 13:04:16.595083 master-0 kubenswrapper[19715]: I0313 13:04:16.594978 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-s2769" Mar 13 13:04:27.572835 master-0 kubenswrapper[19715]: I0313 13:04:27.572610 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6c7688d46-wm7m9" Mar 13 13:04:34.779767 master-0 kubenswrapper[19715]: I0313 13:04:34.779665 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5c4fm"] Mar 13 13:04:34.785337 master-0 kubenswrapper[19715]: I0313 13:04:34.785240 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:34.792383 master-0 kubenswrapper[19715]: I0313 13:04:34.792302 19715 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Mar 13 13:04:34.793448 master-0 kubenswrapper[19715]: I0313 13:04:34.793419 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Mar 13 13:04:34.804790 master-0 kubenswrapper[19715]: I0313 13:04:34.804682 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb"] Mar 13 13:04:34.808653 master-0 kubenswrapper[19715]: I0313 13:04:34.806614 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" Mar 13 13:04:34.820523 master-0 kubenswrapper[19715]: I0313 13:04:34.820440 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb"] Mar 13 13:04:34.821986 master-0 kubenswrapper[19715]: I0313 13:04:34.821937 19715 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Mar 13 13:04:34.922153 master-0 kubenswrapper[19715]: I0313 13:04:34.921154 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6wnx\" (UniqueName: \"kubernetes.io/projected/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-kube-api-access-f6wnx\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:34.922153 master-0 kubenswrapper[19715]: I0313 13:04:34.921304 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-frr-startup\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:34.922153 master-0 kubenswrapper[19715]: I0313 13:04:34.921391 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-frr-sockets\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:34.922153 master-0 kubenswrapper[19715]: I0313 13:04:34.921463 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-metrics-certs\") pod \"frr-k8s-5c4fm\" (UID: 
\"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:34.922153 master-0 kubenswrapper[19715]: I0313 13:04:34.921492 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-reloader\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:34.922153 master-0 kubenswrapper[19715]: I0313 13:04:34.921552 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4779057d-1e1c-434d-b197-5401a1bec1e8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-9qbxb\" (UID: \"4779057d-1e1c-434d-b197-5401a1bec1e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" Mar 13 13:04:34.922153 master-0 kubenswrapper[19715]: I0313 13:04:34.921644 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-frr-conf\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:34.922153 master-0 kubenswrapper[19715]: I0313 13:04:34.921675 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-metrics\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:34.922153 master-0 kubenswrapper[19715]: I0313 13:04:34.921715 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8qfv\" (UniqueName: \"kubernetes.io/projected/4779057d-1e1c-434d-b197-5401a1bec1e8-kube-api-access-j8qfv\") pod 
\"frr-k8s-webhook-server-bcc4b6f68-9qbxb\" (UID: \"4779057d-1e1c-434d-b197-5401a1bec1e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" Mar 13 13:04:34.965170 master-0 kubenswrapper[19715]: I0313 13:04:34.965073 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-zlvcv"] Mar 13 13:04:34.971338 master-0 kubenswrapper[19715]: I0313 13:04:34.971273 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zlvcv" Mar 13 13:04:34.979151 master-0 kubenswrapper[19715]: I0313 13:04:34.978394 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Mar 13 13:04:34.979151 master-0 kubenswrapper[19715]: I0313 13:04:34.978630 19715 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Mar 13 13:04:34.979151 master-0 kubenswrapper[19715]: I0313 13:04:34.978905 19715 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Mar 13 13:04:34.988512 master-0 kubenswrapper[19715]: I0313 13:04:34.986657 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-gxqx5"] Mar 13 13:04:35.000614 master-0 kubenswrapper[19715]: I0313 13:04:34.997450 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.000614 master-0 kubenswrapper[19715]: I0313 13:04:34.999641 19715 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Mar 13 13:04:35.010713 master-0 kubenswrapper[19715]: I0313 13:04:35.003681 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-gxqx5"] Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.059648 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-metrics-certs\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.059740 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-reloader\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.059872 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4779057d-1e1c-434d-b197-5401a1bec1e8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-9qbxb\" (UID: \"4779057d-1e1c-434d-b197-5401a1bec1e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.059986 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-frr-conf\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.067697 master-0 
kubenswrapper[19715]: I0313 13:04:35.060034 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-metrics\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.060098 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8qfv\" (UniqueName: \"kubernetes.io/projected/4779057d-1e1c-434d-b197-5401a1bec1e8-kube-api-access-j8qfv\") pod \"frr-k8s-webhook-server-bcc4b6f68-9qbxb\" (UID: \"4779057d-1e1c-434d-b197-5401a1bec1e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.063749 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6wnx\" (UniqueName: \"kubernetes.io/projected/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-kube-api-access-f6wnx\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.063914 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-frr-startup\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.064001 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-frr-sockets\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.064531 19715 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-reloader\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.065006 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-frr-sockets\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.065304 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-metrics\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.067697 master-0 kubenswrapper[19715]: I0313 13:04:35.066767 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-frr-conf\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.078609 master-0 kubenswrapper[19715]: I0313 13:04:35.072318 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-frr-startup\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.090643 master-0 kubenswrapper[19715]: I0313 13:04:35.086285 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-metrics-certs\") pod \"frr-k8s-5c4fm\" 
(UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.096739 master-0 kubenswrapper[19715]: I0313 13:04:35.094048 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4779057d-1e1c-434d-b197-5401a1bec1e8-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-9qbxb\" (UID: \"4779057d-1e1c-434d-b197-5401a1bec1e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" Mar 13 13:04:35.115603 master-0 kubenswrapper[19715]: I0313 13:04:35.113485 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8qfv\" (UniqueName: \"kubernetes.io/projected/4779057d-1e1c-434d-b197-5401a1bec1e8-kube-api-access-j8qfv\") pod \"frr-k8s-webhook-server-bcc4b6f68-9qbxb\" (UID: \"4779057d-1e1c-434d-b197-5401a1bec1e8\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" Mar 13 13:04:35.122970 master-0 kubenswrapper[19715]: I0313 13:04:35.117986 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6wnx\" (UniqueName: \"kubernetes.io/projected/443d9a8a-7c66-4a0e-8d34-5307f6f1ef13-kube-api-access-f6wnx\") pod \"frr-k8s-5c4fm\" (UID: \"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13\") " pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.157551 master-0 kubenswrapper[19715]: I0313 13:04:35.157445 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:35.175811 master-0 kubenswrapper[19715]: I0313 13:04:35.175731 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-metrics-certs\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.175811 master-0 kubenswrapper[19715]: I0313 13:04:35.175818 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6ww9\" (UniqueName: \"kubernetes.io/projected/5b3d5495-d012-46ed-9ccc-96ce46655060-kube-api-access-g6ww9\") pod \"controller-7bb4cc7c98-gxqx5\" (UID: \"5b3d5495-d012-46ed-9ccc-96ce46655060\") " pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.176180 master-0 kubenswrapper[19715]: I0313 13:04:35.175857 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae80375f-bbf8-4030-9cd6-f628f080116f-metallb-excludel2\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.176180 master-0 kubenswrapper[19715]: I0313 13:04:35.175887 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b3d5495-d012-46ed-9ccc-96ce46655060-cert\") pod \"controller-7bb4cc7c98-gxqx5\" (UID: \"5b3d5495-d012-46ed-9ccc-96ce46655060\") " pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.176180 master-0 kubenswrapper[19715]: I0313 13:04:35.175936 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-memberlist\") pod \"speaker-zlvcv\" 
(UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.176180 master-0 kubenswrapper[19715]: I0313 13:04:35.175957 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b3d5495-d012-46ed-9ccc-96ce46655060-metrics-certs\") pod \"controller-7bb4cc7c98-gxqx5\" (UID: \"5b3d5495-d012-46ed-9ccc-96ce46655060\") " pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.176180 master-0 kubenswrapper[19715]: I0313 13:04:35.176000 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsgxm\" (UniqueName: \"kubernetes.io/projected/ae80375f-bbf8-4030-9cd6-f628f080116f-kube-api-access-fsgxm\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.176412 master-0 kubenswrapper[19715]: I0313 13:04:35.176271 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" Mar 13 13:04:35.278241 master-0 kubenswrapper[19715]: I0313 13:04:35.278166 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-memberlist\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.278645 master-0 kubenswrapper[19715]: I0313 13:04:35.278273 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b3d5495-d012-46ed-9ccc-96ce46655060-metrics-certs\") pod \"controller-7bb4cc7c98-gxqx5\" (UID: \"5b3d5495-d012-46ed-9ccc-96ce46655060\") " pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.278645 master-0 kubenswrapper[19715]: E0313 13:04:35.278417 19715 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 13:04:35.278645 master-0 kubenswrapper[19715]: E0313 13:04:35.278591 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-memberlist podName:ae80375f-bbf8-4030-9cd6-f628f080116f nodeName:}" failed. No retries permitted until 2026-03-13 13:04:35.778524889 +0000 UTC m=+902.345197646 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-memberlist") pod "speaker-zlvcv" (UID: "ae80375f-bbf8-4030-9cd6-f628f080116f") : secret "metallb-memberlist" not found Mar 13 13:04:35.279010 master-0 kubenswrapper[19715]: I0313 13:04:35.278763 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsgxm\" (UniqueName: \"kubernetes.io/projected/ae80375f-bbf8-4030-9cd6-f628f080116f-kube-api-access-fsgxm\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.279274 master-0 kubenswrapper[19715]: I0313 13:04:35.279239 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-metrics-certs\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.279422 master-0 kubenswrapper[19715]: I0313 13:04:35.279399 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6ww9\" (UniqueName: \"kubernetes.io/projected/5b3d5495-d012-46ed-9ccc-96ce46655060-kube-api-access-g6ww9\") pod \"controller-7bb4cc7c98-gxqx5\" (UID: \"5b3d5495-d012-46ed-9ccc-96ce46655060\") " pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.279522 master-0 kubenswrapper[19715]: I0313 13:04:35.279501 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae80375f-bbf8-4030-9cd6-f628f080116f-metallb-excludel2\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.281717 master-0 kubenswrapper[19715]: I0313 13:04:35.279599 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: 
\"kubernetes.io/secret/5b3d5495-d012-46ed-9ccc-96ce46655060-cert\") pod \"controller-7bb4cc7c98-gxqx5\" (UID: \"5b3d5495-d012-46ed-9ccc-96ce46655060\") " pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.281717 master-0 kubenswrapper[19715]: I0313 13:04:35.281562 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/ae80375f-bbf8-4030-9cd6-f628f080116f-metallb-excludel2\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.284880 master-0 kubenswrapper[19715]: I0313 13:04:35.284666 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b3d5495-d012-46ed-9ccc-96ce46655060-metrics-certs\") pod \"controller-7bb4cc7c98-gxqx5\" (UID: \"5b3d5495-d012-46ed-9ccc-96ce46655060\") " pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.285656 master-0 kubenswrapper[19715]: I0313 13:04:35.285613 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-metrics-certs\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.286851 master-0 kubenswrapper[19715]: I0313 13:04:35.286813 19715 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 13 13:04:35.298417 master-0 kubenswrapper[19715]: I0313 13:04:35.298338 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5b3d5495-d012-46ed-9ccc-96ce46655060-cert\") pod \"controller-7bb4cc7c98-gxqx5\" (UID: \"5b3d5495-d012-46ed-9ccc-96ce46655060\") " pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.303153 master-0 kubenswrapper[19715]: I0313 13:04:35.303091 19715 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsgxm\" (UniqueName: \"kubernetes.io/projected/ae80375f-bbf8-4030-9cd6-f628f080116f-kube-api-access-fsgxm\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.304665 master-0 kubenswrapper[19715]: I0313 13:04:35.304611 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6ww9\" (UniqueName: \"kubernetes.io/projected/5b3d5495-d012-46ed-9ccc-96ce46655060-kube-api-access-g6ww9\") pod \"controller-7bb4cc7c98-gxqx5\" (UID: \"5b3d5495-d012-46ed-9ccc-96ce46655060\") " pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.449840 master-0 kubenswrapper[19715]: I0313 13:04:35.449750 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:35.676857 master-0 kubenswrapper[19715]: I0313 13:04:35.674628 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5c4fm" event={"ID":"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13","Type":"ContainerStarted","Data":"63caa4a7a2f58806a52a6cc1eab8ad582f69bd525b3b162ac6b6b4594d6b96fd"} Mar 13 13:04:35.723886 master-0 kubenswrapper[19715]: I0313 13:04:35.723830 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb"] Mar 13 13:04:35.798694 master-0 kubenswrapper[19715]: I0313 13:04:35.798561 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-memberlist\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv" Mar 13 13:04:35.802273 master-0 kubenswrapper[19715]: E0313 13:04:35.802192 19715 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 
13:04:35.802409 master-0 kubenswrapper[19715]: E0313 13:04:35.802347 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-memberlist podName:ae80375f-bbf8-4030-9cd6-f628f080116f nodeName:}" failed. No retries permitted until 2026-03-13 13:04:36.802320652 +0000 UTC m=+903.368993419 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-memberlist") pod "speaker-zlvcv" (UID: "ae80375f-bbf8-4030-9cd6-f628f080116f") : secret "metallb-memberlist" not found
Mar 13 13:04:35.957432 master-0 kubenswrapper[19715]: I0313 13:04:35.957379 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-gxqx5"]
Mar 13 13:04:35.967166 master-0 kubenswrapper[19715]: W0313 13:04:35.967100 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5b3d5495_d012_46ed_9ccc_96ce46655060.slice/crio-da3d77450a3650ccab3fd035f62731e3d5392f1fc1fed0b7054fd8b76233e6b4 WatchSource:0}: Error finding container da3d77450a3650ccab3fd035f62731e3d5392f1fc1fed0b7054fd8b76233e6b4: Status 404 returned error can't find the container with id da3d77450a3650ccab3fd035f62731e3d5392f1fc1fed0b7054fd8b76233e6b4
Mar 13 13:04:36.686371 master-0 kubenswrapper[19715]: I0313 13:04:36.686270 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" event={"ID":"4779057d-1e1c-434d-b197-5401a1bec1e8","Type":"ContainerStarted","Data":"582814cde6b1180cdf0579f8158778c6cfea7a57db27dac87f21dc9c51d92254"}
Mar 13 13:04:36.689024 master-0 kubenswrapper[19715]: I0313 13:04:36.688918 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-gxqx5" event={"ID":"5b3d5495-d012-46ed-9ccc-96ce46655060","Type":"ContainerStarted","Data":"f999089dc9878413b9f5eb58f059c5031d4ffc8011468f13c1db0fca05e07465"}
Mar 13 13:04:36.689024 master-0 kubenswrapper[19715]: I0313 13:04:36.688974 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-gxqx5" event={"ID":"5b3d5495-d012-46ed-9ccc-96ce46655060","Type":"ContainerStarted","Data":"da3d77450a3650ccab3fd035f62731e3d5392f1fc1fed0b7054fd8b76233e6b4"}
Mar 13 13:04:36.820243 master-0 kubenswrapper[19715]: I0313 13:04:36.820137 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-memberlist\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv"
Mar 13 13:04:36.827604 master-0 kubenswrapper[19715]: I0313 13:04:36.827520 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/ae80375f-bbf8-4030-9cd6-f628f080116f-memberlist\") pod \"speaker-zlvcv\" (UID: \"ae80375f-bbf8-4030-9cd6-f628f080116f\") " pod="metallb-system/speaker-zlvcv"
Mar 13 13:04:36.873225 master-0 kubenswrapper[19715]: I0313 13:04:36.873147 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-zlvcv"
Mar 13 13:04:36.917680 master-0 kubenswrapper[19715]: W0313 13:04:36.917616 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podae80375f_bbf8_4030_9cd6_f628f080116f.slice/crio-6f72d5c406ff3a273982efb0eec618b5cba621c9a06acd53abbc45a29bf5bec4 WatchSource:0}: Error finding container 6f72d5c406ff3a273982efb0eec618b5cba621c9a06acd53abbc45a29bf5bec4: Status 404 returned error can't find the container with id 6f72d5c406ff3a273982efb0eec618b5cba621c9a06acd53abbc45a29bf5bec4
Mar 13 13:04:37.741105 master-0 kubenswrapper[19715]: I0313 13:04:37.740847 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zlvcv" event={"ID":"ae80375f-bbf8-4030-9cd6-f628f080116f","Type":"ContainerStarted","Data":"f72680aad790bc81e2f562ddbf573891d4d7b950a1adcf948a43f2f735eed725"}
Mar 13 13:04:37.741105 master-0 kubenswrapper[19715]: I0313 13:04:37.740910 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zlvcv" event={"ID":"ae80375f-bbf8-4030-9cd6-f628f080116f","Type":"ContainerStarted","Data":"6f72d5c406ff3a273982efb0eec618b5cba621c9a06acd53abbc45a29bf5bec4"}
Mar 13 13:04:38.202885 master-0 kubenswrapper[19715]: I0313 13:04:38.202769 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr"]
Mar 13 13:04:38.205624 master-0 kubenswrapper[19715]: I0313 13:04:38.205015 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr"
Mar 13 13:04:38.224623 master-0 kubenswrapper[19715]: I0313 13:04:38.224496 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr"]
Mar 13 13:04:38.245596 master-0 kubenswrapper[19715]: I0313 13:04:38.245479 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"]
Mar 13 13:04:38.265152 master-0 kubenswrapper[19715]: I0313 13:04:38.260002 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"
Mar 13 13:04:38.265152 master-0 kubenswrapper[19715]: I0313 13:04:38.263802 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook"
Mar 13 13:04:38.305766 master-0 kubenswrapper[19715]: I0313 13:04:38.298726 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-qmgtq"]
Mar 13 13:04:38.305766 master-0 kubenswrapper[19715]: I0313 13:04:38.302288 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.323514 master-0 kubenswrapper[19715]: I0313 13:04:38.323476 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"]
Mar 13 13:04:38.358139 master-0 kubenswrapper[19715]: I0313 13:04:38.356495 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-959zd\" (UniqueName: \"kubernetes.io/projected/5a03c104-eb50-4e42-b7df-16466c74cde4-kube-api-access-959zd\") pod \"nmstate-metrics-9b8c8685d-cd9cr\" (UID: \"5a03c104-eb50-4e42-b7df-16466c74cde4\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr"
Mar 13 13:04:38.358139 master-0 kubenswrapper[19715]: I0313 13:04:38.358056 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqhq5\" (UniqueName: \"kubernetes.io/projected/189f87d2-721f-43b8-902f-a01a5187de82-kube-api-access-lqhq5\") pod \"nmstate-webhook-5f558f5558-2k9jb\" (UID: \"189f87d2-721f-43b8-902f-a01a5187de82\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"
Mar 13 13:04:38.358139 master-0 kubenswrapper[19715]: I0313 13:04:38.358240 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/189f87d2-721f-43b8-902f-a01a5187de82-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-2k9jb\" (UID: \"189f87d2-721f-43b8-902f-a01a5187de82\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"
Mar 13 13:04:38.447261 master-0 kubenswrapper[19715]: I0313 13:04:38.443039 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"]
Mar 13 13:04:38.447261 master-0 kubenswrapper[19715]: I0313 13:04:38.444696 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.452449 master-0 kubenswrapper[19715]: I0313 13:04:38.450688 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf"
Mar 13 13:04:38.452449 master-0 kubenswrapper[19715]: I0313 13:04:38.451861 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert"
Mar 13 13:04:38.461723 master-0 kubenswrapper[19715]: I0313 13:04:38.461493 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"]
Mar 13 13:04:38.461723 master-0 kubenswrapper[19715]: I0313 13:04:38.461591 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/189f87d2-721f-43b8-902f-a01a5187de82-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-2k9jb\" (UID: \"189f87d2-721f-43b8-902f-a01a5187de82\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"
Mar 13 13:04:38.461939 master-0 kubenswrapper[19715]: I0313 13:04:38.461813 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkqkj\" (UniqueName: \"kubernetes.io/projected/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-kube-api-access-pkqkj\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.461939 master-0 kubenswrapper[19715]: I0313 13:04:38.461899 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-nmstate-lock\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.462082 master-0 kubenswrapper[19715]: E0313 13:04:38.461957 19715 secret.go:189] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found
Mar 13 13:04:38.462137 master-0 kubenswrapper[19715]: I0313 13:04:38.462011 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-dbus-socket\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.462205 master-0 kubenswrapper[19715]: E0313 13:04:38.462155 19715 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/189f87d2-721f-43b8-902f-a01a5187de82-tls-key-pair podName:189f87d2-721f-43b8-902f-a01a5187de82 nodeName:}" failed. No retries permitted until 2026-03-13 13:04:38.962114256 +0000 UTC m=+905.528787193 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/189f87d2-721f-43b8-902f-a01a5187de82-tls-key-pair") pod "nmstate-webhook-5f558f5558-2k9jb" (UID: "189f87d2-721f-43b8-902f-a01a5187de82") : secret "openshift-nmstate-webhook" not found
Mar 13 13:04:38.462266 master-0 kubenswrapper[19715]: I0313 13:04:38.462213 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-959zd\" (UniqueName: \"kubernetes.io/projected/5a03c104-eb50-4e42-b7df-16466c74cde4-kube-api-access-959zd\") pod \"nmstate-metrics-9b8c8685d-cd9cr\" (UID: \"5a03c104-eb50-4e42-b7df-16466c74cde4\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr"
Mar 13 13:04:38.462328 master-0 kubenswrapper[19715]: I0313 13:04:38.462283 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-ovs-socket\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.463449 master-0 kubenswrapper[19715]: I0313 13:04:38.463413 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqhq5\" (UniqueName: \"kubernetes.io/projected/189f87d2-721f-43b8-902f-a01a5187de82-kube-api-access-lqhq5\") pod \"nmstate-webhook-5f558f5558-2k9jb\" (UID: \"189f87d2-721f-43b8-902f-a01a5187de82\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"
Mar 13 13:04:38.508884 master-0 kubenswrapper[19715]: I0313 13:04:38.508816 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-959zd\" (UniqueName: \"kubernetes.io/projected/5a03c104-eb50-4e42-b7df-16466c74cde4-kube-api-access-959zd\") pod \"nmstate-metrics-9b8c8685d-cd9cr\" (UID: \"5a03c104-eb50-4e42-b7df-16466c74cde4\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr"
Mar 13 13:04:38.522440 master-0 kubenswrapper[19715]: I0313 13:04:38.522359 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqhq5\" (UniqueName: \"kubernetes.io/projected/189f87d2-721f-43b8-902f-a01a5187de82-kube-api-access-lqhq5\") pod \"nmstate-webhook-5f558f5558-2k9jb\" (UID: \"189f87d2-721f-43b8-902f-a01a5187de82\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"
Mar 13 13:04:38.564177 master-0 kubenswrapper[19715]: I0313 13:04:38.563365 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr"
Mar 13 13:04:38.567504 master-0 kubenswrapper[19715]: I0313 13:04:38.567041 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkqkj\" (UniqueName: \"kubernetes.io/projected/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-kube-api-access-pkqkj\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.567504 master-0 kubenswrapper[19715]: I0313 13:04:38.567286 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-nmstate-lock\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.567504 master-0 kubenswrapper[19715]: I0313 13:04:38.567349 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-dbus-socket\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.567504 master-0 kubenswrapper[19715]: I0313 13:04:38.567432 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a704337a-47e8-4f3e-a4c1-a3e147a67125-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-s8ztg\" (UID: \"a704337a-47e8-4f3e-a4c1-a3e147a67125\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.567504 master-0 kubenswrapper[19715]: I0313 13:04:38.567484 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a704337a-47e8-4f3e-a4c1-a3e147a67125-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-s8ztg\" (UID: \"a704337a-47e8-4f3e-a4c1-a3e147a67125\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.567870 master-0 kubenswrapper[19715]: I0313 13:04:38.567756 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-dbus-socket\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.567870 master-0 kubenswrapper[19715]: I0313 13:04:38.567796 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-ovs-socket\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.567870 master-0 kubenswrapper[19715]: I0313 13:04:38.567846 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-nmstate-lock\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.573041 master-0 kubenswrapper[19715]: I0313 13:04:38.567882 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b29f5\" (UniqueName: \"kubernetes.io/projected/a704337a-47e8-4f3e-a4c1-a3e147a67125-kube-api-access-b29f5\") pod \"nmstate-console-plugin-86f58fcf4-s8ztg\" (UID: \"a704337a-47e8-4f3e-a4c1-a3e147a67125\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.573041 master-0 kubenswrapper[19715]: I0313 13:04:38.567922 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-ovs-socket\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.599992 master-0 kubenswrapper[19715]: I0313 13:04:38.599921 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkqkj\" (UniqueName: \"kubernetes.io/projected/2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad-kube-api-access-pkqkj\") pod \"nmstate-handler-qmgtq\" (UID: \"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad\") " pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.627887 master-0 kubenswrapper[19715]: I0313 13:04:38.627816 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-qmgtq"
Mar 13 13:04:38.670692 master-0 kubenswrapper[19715]: I0313 13:04:38.669947 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a704337a-47e8-4f3e-a4c1-a3e147a67125-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-s8ztg\" (UID: \"a704337a-47e8-4f3e-a4c1-a3e147a67125\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.670692 master-0 kubenswrapper[19715]: I0313 13:04:38.670032 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a704337a-47e8-4f3e-a4c1-a3e147a67125-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-s8ztg\" (UID: \"a704337a-47e8-4f3e-a4c1-a3e147a67125\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.670692 master-0 kubenswrapper[19715]: I0313 13:04:38.670099 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b29f5\" (UniqueName: \"kubernetes.io/projected/a704337a-47e8-4f3e-a4c1-a3e147a67125-kube-api-access-b29f5\") pod \"nmstate-console-plugin-86f58fcf4-s8ztg\" (UID: \"a704337a-47e8-4f3e-a4c1-a3e147a67125\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.672956 master-0 kubenswrapper[19715]: I0313 13:04:38.672912 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a704337a-47e8-4f3e-a4c1-a3e147a67125-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-s8ztg\" (UID: \"a704337a-47e8-4f3e-a4c1-a3e147a67125\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.703386 master-0 kubenswrapper[19715]: I0313 13:04:38.703341 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a704337a-47e8-4f3e-a4c1-a3e147a67125-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-s8ztg\" (UID: \"a704337a-47e8-4f3e-a4c1-a3e147a67125\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.719994 master-0 kubenswrapper[19715]: I0313 13:04:38.719784 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b29f5\" (UniqueName: \"kubernetes.io/projected/a704337a-47e8-4f3e-a4c1-a3e147a67125-kube-api-access-b29f5\") pod \"nmstate-console-plugin-86f58fcf4-s8ztg\" (UID: \"a704337a-47e8-4f3e-a4c1-a3e147a67125\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.754276 master-0 kubenswrapper[19715]: I0313 13:04:38.752098 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-689d5f465d-xhncs"]
Mar 13 13:04:38.754744 master-0 kubenswrapper[19715]: I0313 13:04:38.754712 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.755117 master-0 kubenswrapper[19715]: I0313 13:04:38.755043 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qmgtq" event={"ID":"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad","Type":"ContainerStarted","Data":"a45a029ac563254811c3011ff98f38ab71a5f63938b8b5bb87f2abf50a3fe3a3"}
Mar 13 13:04:38.766344 master-0 kubenswrapper[19715]: I0313 13:04:38.763769 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-689d5f465d-xhncs"]
Mar 13 13:04:38.783935 master-0 kubenswrapper[19715]: I0313 13:04:38.783132 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"
Mar 13 13:04:38.881851 master-0 kubenswrapper[19715]: I0313 13:04:38.881784 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfsd4\" (UniqueName: \"kubernetes.io/projected/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-kube-api-access-bfsd4\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.882299 master-0 kubenswrapper[19715]: I0313 13:04:38.882279 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-console-config\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.882408 master-0 kubenswrapper[19715]: I0313 13:04:38.882394 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-trusted-ca-bundle\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.882552 master-0 kubenswrapper[19715]: I0313 13:04:38.882537 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-service-ca\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.882686 master-0 kubenswrapper[19715]: I0313 13:04:38.882672 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-console-serving-cert\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.882771 master-0 kubenswrapper[19715]: I0313 13:04:38.882756 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-console-oauth-config\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.882852 master-0 kubenswrapper[19715]: I0313 13:04:38.882839 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-oauth-serving-cert\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.984051 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfsd4\" (UniqueName: \"kubernetes.io/projected/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-kube-api-access-bfsd4\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.984193 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-console-config\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.984251 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-trusted-ca-bundle\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.984305 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/189f87d2-721f-43b8-902f-a01a5187de82-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-2k9jb\" (UID: \"189f87d2-721f-43b8-902f-a01a5187de82\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.984381 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-service-ca\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.984425 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-console-oauth-config\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.984452 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-console-serving-cert\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.984478 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-oauth-serving-cert\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.985612 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-oauth-serving-cert\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.987959 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-trusted-ca-bundle\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.988758 master-0 kubenswrapper[19715]: I0313 13:04:38.988726 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-service-ca\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:38.989455 master-0 kubenswrapper[19715]: I0313 13:04:38.989426 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-console-config\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:39.014619 master-0 kubenswrapper[19715]: I0313 13:04:38.996310 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-console-oauth-config\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:39.029621 master-0 kubenswrapper[19715]: I0313 13:04:39.018169 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-console-serving-cert\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:39.030890 master-0 kubenswrapper[19715]: I0313 13:04:39.030854 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/189f87d2-721f-43b8-902f-a01a5187de82-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-2k9jb\" (UID: \"189f87d2-721f-43b8-902f-a01a5187de82\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"
Mar 13 13:04:39.066620 master-0 kubenswrapper[19715]: I0313 13:04:39.056382 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfsd4\" (UniqueName: \"kubernetes.io/projected/44b38341-ee2e-4fc0-ae38-8ad3aadb33e2-kube-api-access-bfsd4\") pod \"console-689d5f465d-xhncs\" (UID: \"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2\") " pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:39.106617 master-0 kubenswrapper[19715]: I0313 13:04:39.105788 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-689d5f465d-xhncs"
Mar 13 13:04:39.197864 master-0 kubenswrapper[19715]: I0313 13:04:39.197803 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"
Mar 13 13:04:39.369129 master-0 kubenswrapper[19715]: I0313 13:04:39.368978 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr"]
Mar 13 13:04:39.566016 master-0 kubenswrapper[19715]: I0313 13:04:39.564693 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg"]
Mar 13 13:04:39.754057 master-0 kubenswrapper[19715]: I0313 13:04:39.753959 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-689d5f465d-xhncs"]
Mar 13 13:04:39.777269 master-0 kubenswrapper[19715]: I0313 13:04:39.777106 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr" event={"ID":"5a03c104-eb50-4e42-b7df-16466c74cde4","Type":"ContainerStarted","Data":"56de2e2d0e636d3c943bd6e36885a92add9c67f13fa9bc6dc26962c60bd4246f"}
Mar 13 13:04:39.855230 master-0 kubenswrapper[19715]: I0313 13:04:39.855143 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb"]
Mar 13 13:04:40.968821 master-0 kubenswrapper[19715]: W0313 13:04:40.968710 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda704337a_47e8_4f3e_a4c1_a3e147a67125.slice/crio-e1b1f65e2aa880d5dbe15c81da7fdc94f159a83208176bd10b1c4267556bc94f WatchSource:0}: Error finding container e1b1f65e2aa880d5dbe15c81da7fdc94f159a83208176bd10b1c4267556bc94f: Status 404 returned error can't find the container with id e1b1f65e2aa880d5dbe15c81da7fdc94f159a83208176bd10b1c4267556bc94f
Mar 13 13:04:40.978389 master-0 kubenswrapper[19715]: W0313 13:04:40.978315 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44b38341_ee2e_4fc0_ae38_8ad3aadb33e2.slice/crio-afc008b50441d5d9d2027215db2912b8ec47a28997de4337c6e5d8ac011e401e WatchSource:0}: Error finding container afc008b50441d5d9d2027215db2912b8ec47a28997de4337c6e5d8ac011e401e: Status 404 returned error can't find the container with id afc008b50441d5d9d2027215db2912b8ec47a28997de4337c6e5d8ac011e401e
Mar 13 13:04:40.985042 master-0 kubenswrapper[19715]: W0313 13:04:40.984741 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod189f87d2_721f_43b8_902f_a01a5187de82.slice/crio-90d09d3a0db54249ca05628b13b5d0c491486c3c2867f7b2ae1444aeac56d50d WatchSource:0}: Error finding container 90d09d3a0db54249ca05628b13b5d0c491486c3c2867f7b2ae1444aeac56d50d: Status 404 returned error can't find the container with id 90d09d3a0db54249ca05628b13b5d0c491486c3c2867f7b2ae1444aeac56d50d
Mar 13 13:04:41.007344 master-0 kubenswrapper[19715]: I0313 13:04:41.007214 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg" event={"ID":"a704337a-47e8-4f3e-a4c1-a3e147a67125","Type":"ContainerStarted","Data":"e1b1f65e2aa880d5dbe15c81da7fdc94f159a83208176bd10b1c4267556bc94f"}
Mar 13 13:04:42.024621 master-0 kubenswrapper[19715]: I0313 13:04:42.024514 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-689d5f465d-xhncs" event={"ID":"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2","Type":"ContainerStarted","Data":"65083ba4dfa188fef8429c05e3c45be662c1d5a7bcb945db8e98236422c59d39"}
Mar 13 13:04:42.024621 master-0 kubenswrapper[19715]: I0313 13:04:42.024621 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-689d5f465d-xhncs" event={"ID":"44b38341-ee2e-4fc0-ae38-8ad3aadb33e2","Type":"ContainerStarted","Data":"afc008b50441d5d9d2027215db2912b8ec47a28997de4337c6e5d8ac011e401e"}
Mar 13 13:04:42.032477 master-0 kubenswrapper[19715]: I0313 13:04:42.032406 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb" event={"ID":"189f87d2-721f-43b8-902f-a01a5187de82","Type":"ContainerStarted","Data":"90d09d3a0db54249ca05628b13b5d0c491486c3c2867f7b2ae1444aeac56d50d"}
Mar 13 13:04:42.038452 master-0 kubenswrapper[19715]: I0313 13:04:42.038390 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-gxqx5" event={"ID":"5b3d5495-d012-46ed-9ccc-96ce46655060","Type":"ContainerStarted","Data":"b4332acabc3fe6c2295b645fb5dc53ac5cec0aa6e47c856bb78dca2e2e301577"}
Mar 13 13:04:42.039253 master-0 kubenswrapper[19715]: I0313 13:04:42.039190 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-gxqx5"
Mar 13 13:04:42.050456 master-0 kubenswrapper[19715]: I0313 13:04:42.050359 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-zlvcv" event={"ID":"ae80375f-bbf8-4030-9cd6-f628f080116f","Type":"ContainerStarted","Data":"1b05417ad94a5b3a66aaf0e59c467cbb6784f097ed490a53f41f5ffe33f306fc"}
Mar 13 13:04:42.050915 master-0 kubenswrapper[19715]: I0313 13:04:42.050883 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-zlvcv"
Mar 13 13:04:42.074473 master-0 kubenswrapper[19715]: I0313 13:04:42.072064 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-689d5f465d-xhncs" podStartSLOduration=4.071989689 podStartE2EDuration="4.071989689s" podCreationTimestamp="2026-03-13 13:04:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:04:42.060749707 +0000 UTC m=+908.627422494" watchObservedRunningTime="2026-03-13 13:04:42.071989689 +0000 UTC m=+908.638662446"
Mar 13 13:04:42.093428 master-0 kubenswrapper[19715]: I0313 13:04:42.093278 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-zlvcv" podStartSLOduration=4.215714401 podStartE2EDuration="8.093238745s" podCreationTimestamp="2026-03-13 13:04:34 +0000 UTC" firstStartedPulling="2026-03-13 13:04:37.231282816 +0000 UTC m=+903.797955573" lastFinishedPulling="2026-03-13 13:04:41.10880716 +0000 UTC m=+907.675479917" observedRunningTime="2026-03-13 13:04:42.085530234 +0000 UTC m=+908.652203021" watchObservedRunningTime="2026-03-13 13:04:42.093238745 +0000 UTC m=+908.659911502"
Mar 13 13:04:42.114963 master-0 kubenswrapper[19715]: I0313 13:04:42.114810 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-gxqx5" podStartSLOduration=3.170446287 podStartE2EDuration="8.11477148s" podCreationTimestamp="2026-03-13 13:04:34 +0000 UTC" firstStartedPulling="2026-03-13 13:04:36.157538369 +0000 UTC m=+902.724211126" lastFinishedPulling="2026-03-13 13:04:41.101863562 +0000 UTC m=+907.668536319" observedRunningTime="2026-03-13 13:04:42.113216962 +0000 UTC m=+908.679889729" watchObservedRunningTime="2026-03-13 13:04:42.11477148 +0000 UTC m=+908.681444247"
Mar 13 13:04:48.159752 master-0 kubenswrapper[19715]: I0313 13:04:48.159510 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr" event={"ID":"5a03c104-eb50-4e42-b7df-16466c74cde4","Type":"ContainerStarted","Data":"b8ab617c3471eaac79cc8488948cc7f25743f8dadc032ef30ca293a93695e8a1"}
Mar 13 13:04:48.159752 master-0 kubenswrapper[19715]: I0313 13:04:48.159634 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr" event={"ID":"5a03c104-eb50-4e42-b7df-16466c74cde4","Type":"ContainerStarted","Data":"e1bed9518344853c705cea27ba6461d47f0f4ca890580d098227e31458c37839"}
Mar 13 13:04:48.173609 master-0 kubenswrapper[19715]: I0313 13:04:48.173530 19715 generic.go:334] "Generic (PLEG): container finished" podID="443d9a8a-7c66-4a0e-8d34-5307f6f1ef13" containerID="27232b121f9335e7b5116d4ff1aadc8bbd0ac8f16d380ecb05043599a0fa1607" exitCode=0
Mar 13 13:04:48.173984 master-0 kubenswrapper[19715]: I0313 13:04:48.173636 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5c4fm" event={"ID":"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13","Type":"ContainerDied","Data":"27232b121f9335e7b5116d4ff1aadc8bbd0ac8f16d380ecb05043599a0fa1607"}
Mar 13 13:04:48.176906 master-0 kubenswrapper[19715]: I0313 13:04:48.176552 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" event={"ID":"4779057d-1e1c-434d-b197-5401a1bec1e8","Type":"ContainerStarted","Data":"559fe869172ff9ad3ec551e5b541f84d72756363722dd4cdad23858456094899"}
Mar 13 13:04:48.177552 master-0 kubenswrapper[19715]: I0313 13:04:48.177495 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb"
Mar 13 13:04:48.191498 master-0 kubenswrapper[19715]: I0313 13:04:48.191356 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-cd9cr" podStartSLOduration=2.335941009 podStartE2EDuration="10.191323123s" podCreationTimestamp="2026-03-13 13:04:38 +0000 UTC" firstStartedPulling="2026-03-13
13:04:39.401707196 +0000 UTC m=+905.968379953" lastFinishedPulling="2026-03-13 13:04:47.25708931 +0000 UTC m=+913.823762067" observedRunningTime="2026-03-13 13:04:48.183702693 +0000 UTC m=+914.750375470" watchObservedRunningTime="2026-03-13 13:04:48.191323123 +0000 UTC m=+914.757995880" Mar 13 13:04:48.192985 master-0 kubenswrapper[19715]: I0313 13:04:48.192880 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb" event={"ID":"189f87d2-721f-43b8-902f-a01a5187de82","Type":"ContainerStarted","Data":"a412098ae35986e81aaff3c7138a57cfc5cb284ac5c9e69f9ff4b5efbb23f28b"} Mar 13 13:04:48.193141 master-0 kubenswrapper[19715]: I0313 13:04:48.193013 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb" Mar 13 13:04:48.199755 master-0 kubenswrapper[19715]: I0313 13:04:48.199673 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-qmgtq" event={"ID":"2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad","Type":"ContainerStarted","Data":"f9ee64040a46c788860b1d99d021e0ca3456a8ca6c0d8b1612621a91646c0080"} Mar 13 13:04:48.200253 master-0 kubenswrapper[19715]: I0313 13:04:48.200068 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-qmgtq" Mar 13 13:04:48.280031 master-0 kubenswrapper[19715]: I0313 13:04:48.277919 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" podStartSLOduration=2.747175827 podStartE2EDuration="14.277896017s" podCreationTimestamp="2026-03-13 13:04:34 +0000 UTC" firstStartedPulling="2026-03-13 13:04:35.724049338 +0000 UTC m=+902.290722095" lastFinishedPulling="2026-03-13 13:04:47.254769528 +0000 UTC m=+913.821442285" observedRunningTime="2026-03-13 13:04:48.275336737 +0000 UTC m=+914.842009504" watchObservedRunningTime="2026-03-13 13:04:48.277896017 +0000 UTC 
m=+914.844568764" Mar 13 13:04:48.318764 master-0 kubenswrapper[19715]: I0313 13:04:48.315648 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb" podStartSLOduration=4.126756185 podStartE2EDuration="10.315569607s" podCreationTimestamp="2026-03-13 13:04:38 +0000 UTC" firstStartedPulling="2026-03-13 13:04:41.072979046 +0000 UTC m=+907.639651803" lastFinishedPulling="2026-03-13 13:04:47.261792468 +0000 UTC m=+913.828465225" observedRunningTime="2026-03-13 13:04:48.30704406 +0000 UTC m=+914.873716837" watchObservedRunningTime="2026-03-13 13:04:48.315569607 +0000 UTC m=+914.882242364" Mar 13 13:04:48.347342 master-0 kubenswrapper[19715]: I0313 13:04:48.347178 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-qmgtq" podStartSLOduration=1.797503486 podStartE2EDuration="10.347130897s" podCreationTimestamp="2026-03-13 13:04:38 +0000 UTC" firstStartedPulling="2026-03-13 13:04:38.704758784 +0000 UTC m=+905.271431541" lastFinishedPulling="2026-03-13 13:04:47.254386195 +0000 UTC m=+913.821058952" observedRunningTime="2026-03-13 13:04:48.33286975 +0000 UTC m=+914.899542517" watchObservedRunningTime="2026-03-13 13:04:48.347130897 +0000 UTC m=+914.913803654" Mar 13 13:04:49.107270 master-0 kubenswrapper[19715]: I0313 13:04:49.106931 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-689d5f465d-xhncs" Mar 13 13:04:49.107270 master-0 kubenswrapper[19715]: I0313 13:04:49.107093 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-689d5f465d-xhncs" Mar 13 13:04:49.113773 master-0 kubenswrapper[19715]: I0313 13:04:49.113721 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-689d5f465d-xhncs" Mar 13 13:04:49.221245 master-0 kubenswrapper[19715]: I0313 13:04:49.221135 19715 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg" event={"ID":"a704337a-47e8-4f3e-a4c1-a3e147a67125","Type":"ContainerStarted","Data":"9c9cfaf918862d06e1cddd3fd10530b306a7b1173ffcc03c2d28bffd9767cf18"} Mar 13 13:04:49.229780 master-0 kubenswrapper[19715]: I0313 13:04:49.229688 19715 generic.go:334] "Generic (PLEG): container finished" podID="443d9a8a-7c66-4a0e-8d34-5307f6f1ef13" containerID="1a8ff863efce9ea09046e44311bbbe8601512b7dd6469f4016ca60f0b91dbbbd" exitCode=0 Mar 13 13:04:49.230733 master-0 kubenswrapper[19715]: I0313 13:04:49.230331 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5c4fm" event={"ID":"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13","Type":"ContainerDied","Data":"1a8ff863efce9ea09046e44311bbbe8601512b7dd6469f4016ca60f0b91dbbbd"} Mar 13 13:04:49.236554 master-0 kubenswrapper[19715]: I0313 13:04:49.236404 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-689d5f465d-xhncs" Mar 13 13:04:49.252674 master-0 kubenswrapper[19715]: I0313 13:04:49.252547 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-s8ztg" podStartSLOduration=3.576206086 podStartE2EDuration="11.252509325s" podCreationTimestamp="2026-03-13 13:04:38 +0000 UTC" firstStartedPulling="2026-03-13 13:04:40.979967781 +0000 UTC m=+907.546640538" lastFinishedPulling="2026-03-13 13:04:48.65627102 +0000 UTC m=+915.222943777" observedRunningTime="2026-03-13 13:04:49.24598929 +0000 UTC m=+915.812662077" watchObservedRunningTime="2026-03-13 13:04:49.252509325 +0000 UTC m=+915.819182082" Mar 13 13:04:49.372204 master-0 kubenswrapper[19715]: I0313 13:04:49.372025 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79d876f4d6-kqmws"] Mar 13 13:04:50.246754 master-0 kubenswrapper[19715]: I0313 13:04:50.246666 19715 generic.go:334] "Generic (PLEG): container 
finished" podID="443d9a8a-7c66-4a0e-8d34-5307f6f1ef13" containerID="1520e46e16941188a61446acdc4c35a50cfbeaaa01e8bc0881e0cffdebd6f5dd" exitCode=0 Mar 13 13:04:50.247466 master-0 kubenswrapper[19715]: I0313 13:04:50.246785 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5c4fm" event={"ID":"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13","Type":"ContainerDied","Data":"1520e46e16941188a61446acdc4c35a50cfbeaaa01e8bc0881e0cffdebd6f5dd"} Mar 13 13:04:51.276104 master-0 kubenswrapper[19715]: I0313 13:04:51.275973 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5c4fm" event={"ID":"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13","Type":"ContainerStarted","Data":"adc04dbb72d5d40fb778e2a50c866af2fe5b4c99ef11734e20e70822e56b8fc2"} Mar 13 13:04:51.276104 master-0 kubenswrapper[19715]: I0313 13:04:51.276102 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5c4fm" event={"ID":"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13","Type":"ContainerStarted","Data":"860831f6bfc0f0abfc60b7669f70f0374bf5e8124843329bdaadc56a55193320"} Mar 13 13:04:51.276104 master-0 kubenswrapper[19715]: I0313 13:04:51.276113 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5c4fm" event={"ID":"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13","Type":"ContainerStarted","Data":"09929485fd87900a5d9f30fb09c8b897e372d6edc6e2100d4cc9eee415d945f6"} Mar 13 13:04:51.276104 master-0 kubenswrapper[19715]: I0313 13:04:51.276122 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5c4fm" event={"ID":"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13","Type":"ContainerStarted","Data":"d505171011381c0f2fd2c2ce4eaa9837661829835804dc5652b515a1797df7b2"} Mar 13 13:04:52.295508 master-0 kubenswrapper[19715]: I0313 13:04:52.295416 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5c4fm" 
event={"ID":"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13","Type":"ContainerStarted","Data":"9f5c97dab955a1399d9f0ac06a36e6df456693e90603708c8e9dd5b4cc08fcf4"} Mar 13 13:04:52.295508 master-0 kubenswrapper[19715]: I0313 13:04:52.295507 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5c4fm" event={"ID":"443d9a8a-7c66-4a0e-8d34-5307f6f1ef13","Type":"ContainerStarted","Data":"2437155fc1124996366c7040fb397329c6cd9aa86ec85ce8cfa0db630a178c3b"} Mar 13 13:04:52.296374 master-0 kubenswrapper[19715]: I0313 13:04:52.295853 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:52.336196 master-0 kubenswrapper[19715]: I0313 13:04:52.336034 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5c4fm" podStartSLOduration=6.46644653 podStartE2EDuration="18.335991813s" podCreationTimestamp="2026-03-13 13:04:34 +0000 UTC" firstStartedPulling="2026-03-13 13:04:35.384825812 +0000 UTC m=+901.951498569" lastFinishedPulling="2026-03-13 13:04:47.254371095 +0000 UTC m=+913.821043852" observedRunningTime="2026-03-13 13:04:52.322893722 +0000 UTC m=+918.889566519" watchObservedRunningTime="2026-03-13 13:04:52.335991813 +0000 UTC m=+918.902664560" Mar 13 13:04:53.656446 master-0 kubenswrapper[19715]: I0313 13:04:53.656008 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-qmgtq" Mar 13 13:04:55.159174 master-0 kubenswrapper[19715]: I0313 13:04:55.159059 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:55.206149 master-0 kubenswrapper[19715]: I0313 13:04:55.206079 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:04:55.454749 master-0 kubenswrapper[19715]: I0313 13:04:55.454497 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/controller-7bb4cc7c98-gxqx5" Mar 13 13:04:56.877403 master-0 kubenswrapper[19715]: I0313 13:04:56.877310 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-zlvcv" Mar 13 13:04:59.207862 master-0 kubenswrapper[19715]: I0313 13:04:59.207776 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-2k9jb" Mar 13 13:05:04.005000 master-0 kubenswrapper[19715]: I0313 13:05:04.004855 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-w7dw4"] Mar 13 13:05:04.007285 master-0 kubenswrapper[19715]: I0313 13:05:04.007190 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.016858 master-0 kubenswrapper[19715]: I0313 13:05:04.016764 19715 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert" Mar 13 13:05:04.024379 master-0 kubenswrapper[19715]: I0313 13:05:04.024274 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-w7dw4"] Mar 13 13:05:04.111975 master-0 kubenswrapper[19715]: I0313 13:05:04.111433 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-device-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.111975 master-0 kubenswrapper[19715]: I0313 13:05:04.111595 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-registration-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 
13:05:04.111975 master-0 kubenswrapper[19715]: I0313 13:05:04.111649 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-sys\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.111975 master-0 kubenswrapper[19715]: I0313 13:05:04.111689 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-pod-volumes-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.111975 master-0 kubenswrapper[19715]: I0313 13:05:04.111720 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kcts7\" (UniqueName: \"kubernetes.io/projected/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-kube-api-access-kcts7\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.111975 master-0 kubenswrapper[19715]: I0313 13:05:04.111759 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-file-lock-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.111975 master-0 kubenswrapper[19715]: I0313 13:05:04.111778 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-csi-plugin-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " 
pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.111975 master-0 kubenswrapper[19715]: I0313 13:05:04.111817 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-metrics-cert\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.112556 master-0 kubenswrapper[19715]: I0313 13:05:04.111980 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-node-plugin-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.112556 master-0 kubenswrapper[19715]: I0313 13:05:04.112096 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-lvmd-config\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.112556 master-0 kubenswrapper[19715]: I0313 13:05:04.112253 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-run-udev\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214395 master-0 kubenswrapper[19715]: I0313 13:05:04.214319 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-registration-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " 
pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214395 master-0 kubenswrapper[19715]: I0313 13:05:04.214403 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-sys\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214840 master-0 kubenswrapper[19715]: I0313 13:05:04.214435 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-pod-volumes-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214840 master-0 kubenswrapper[19715]: I0313 13:05:04.214462 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kcts7\" (UniqueName: \"kubernetes.io/projected/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-kube-api-access-kcts7\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214840 master-0 kubenswrapper[19715]: I0313 13:05:04.214484 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-csi-plugin-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214840 master-0 kubenswrapper[19715]: I0313 13:05:04.214503 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-file-lock-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214840 master-0 
kubenswrapper[19715]: I0313 13:05:04.214532 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-metrics-cert\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214840 master-0 kubenswrapper[19715]: I0313 13:05:04.214600 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-node-plugin-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214840 master-0 kubenswrapper[19715]: I0313 13:05:04.214627 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-lvmd-config\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214840 master-0 kubenswrapper[19715]: I0313 13:05:04.214672 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-run-udev\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.214840 master-0 kubenswrapper[19715]: I0313 13:05:04.214714 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-device-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.215143 master-0 kubenswrapper[19715]: I0313 13:05:04.214862 19715 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-device-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.215143 master-0 kubenswrapper[19715]: I0313 13:05:04.215120 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-registration-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.215279 master-0 kubenswrapper[19715]: I0313 13:05:04.215155 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-sys\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.215279 master-0 kubenswrapper[19715]: I0313 13:05:04.215190 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-pod-volumes-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.215681 master-0 kubenswrapper[19715]: I0313 13:05:04.215641 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-run-udev\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.215878 master-0 kubenswrapper[19715]: I0313 13:05:04.215847 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: 
\"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-csi-plugin-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.216221 master-0 kubenswrapper[19715]: I0313 13:05:04.216151 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-node-plugin-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.216221 master-0 kubenswrapper[19715]: I0313 13:05:04.216213 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-lvmd-config\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.216700 master-0 kubenswrapper[19715]: I0313 13:05:04.216628 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-file-lock-dir\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.221823 master-0 kubenswrapper[19715]: I0313 13:05:04.221759 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-metrics-cert\") pod \"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.239855 master-0 kubenswrapper[19715]: I0313 13:05:04.239779 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kcts7\" (UniqueName: \"kubernetes.io/projected/b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb-kube-api-access-kcts7\") pod 
\"vg-manager-w7dw4\" (UID: \"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb\") " pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.372640 master-0 kubenswrapper[19715]: I0313 13:05:04.372295 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:04.917281 master-0 kubenswrapper[19715]: I0313 13:05:04.917184 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-w7dw4"] Mar 13 13:05:05.162061 master-0 kubenswrapper[19715]: I0313 13:05:05.161878 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5c4fm" Mar 13 13:05:05.183365 master-0 kubenswrapper[19715]: I0313 13:05:05.183300 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-9qbxb" Mar 13 13:05:05.480389 master-0 kubenswrapper[19715]: I0313 13:05:05.480195 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-w7dw4" event={"ID":"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb","Type":"ContainerStarted","Data":"b58782de8b0bdd76e4c429eb5895fe21c65c5ca268cda121a0e346abe14b3a0f"} Mar 13 13:05:05.480389 master-0 kubenswrapper[19715]: I0313 13:05:05.480291 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-w7dw4" event={"ID":"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb","Type":"ContainerStarted","Data":"6e2113cb6c228151b101ed3215c56b3c2cacede97e49a4761438ce322a1c022a"} Mar 13 13:05:05.514333 master-0 kubenswrapper[19715]: I0313 13:05:05.514174 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-w7dw4" podStartSLOduration=2.514132375 podStartE2EDuration="2.514132375s" podCreationTimestamp="2026-03-13 13:05:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:05:05.506425884 +0000 
UTC m=+932.073098651" watchObservedRunningTime="2026-03-13 13:05:05.514132375 +0000 UTC m=+932.080805132" Mar 13 13:05:07.510442 master-0 kubenswrapper[19715]: I0313 13:05:07.510362 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-w7dw4_b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb/vg-manager/0.log" Mar 13 13:05:07.511328 master-0 kubenswrapper[19715]: I0313 13:05:07.510468 19715 generic.go:334] "Generic (PLEG): container finished" podID="b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb" containerID="b58782de8b0bdd76e4c429eb5895fe21c65c5ca268cda121a0e346abe14b3a0f" exitCode=1 Mar 13 13:05:07.511328 master-0 kubenswrapper[19715]: I0313 13:05:07.510614 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-w7dw4" event={"ID":"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb","Type":"ContainerDied","Data":"b58782de8b0bdd76e4c429eb5895fe21c65c5ca268cda121a0e346abe14b3a0f"} Mar 13 13:05:07.511678 master-0 kubenswrapper[19715]: I0313 13:05:07.511566 19715 scope.go:117] "RemoveContainer" containerID="b58782de8b0bdd76e4c429eb5895fe21c65c5ca268cda121a0e346abe14b3a0f" Mar 13 13:05:07.882139 master-0 kubenswrapper[19715]: I0313 13:05:07.880971 19715 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 13 13:05:08.059470 master-0 kubenswrapper[19715]: I0313 13:05:08.059070 19715 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-13T13:05:07.881047667Z","Handler":null,"Name":""} Mar 13 13:05:08.064711 master-0 kubenswrapper[19715]: I0313 13:05:08.064129 19715 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Mar 13 13:05:08.064711 master-0 kubenswrapper[19715]: I0313 13:05:08.064239 19715 
csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 13 13:05:08.542895 master-0 kubenswrapper[19715]: I0313 13:05:08.542790 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-w7dw4_b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb/vg-manager/0.log" Mar 13 13:05:08.544026 master-0 kubenswrapper[19715]: I0313 13:05:08.542979 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-w7dw4" event={"ID":"b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb","Type":"ContainerStarted","Data":"09cf0cd2bbe32f997f7325cfb311c4f6be5fbfe274af30f87e76b5bac212c23a"} Mar 13 13:05:11.043699 master-0 kubenswrapper[19715]: I0313 13:05:11.043559 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-rr8h7"] Mar 13 13:05:11.045265 master-0 kubenswrapper[19715]: I0313 13:05:11.045210 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-rr8h7" Mar 13 13:05:11.051278 master-0 kubenswrapper[19715]: I0313 13:05:11.051196 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 13 13:05:11.052706 master-0 kubenswrapper[19715]: I0313 13:05:11.052658 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 13 13:05:11.070871 master-0 kubenswrapper[19715]: I0313 13:05:11.067424 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rr8h7"] Mar 13 13:05:11.131743 master-0 kubenswrapper[19715]: I0313 13:05:11.131530 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmd5d\" (UniqueName: \"kubernetes.io/projected/285a7f92-85ad-4a45-89a9-c0e6b940f766-kube-api-access-nmd5d\") pod \"openstack-operator-index-rr8h7\" (UID: \"285a7f92-85ad-4a45-89a9-c0e6b940f766\") " pod="openstack-operators/openstack-operator-index-rr8h7" Mar 13 13:05:11.234262 master-0 kubenswrapper[19715]: I0313 13:05:11.234163 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nmd5d\" (UniqueName: \"kubernetes.io/projected/285a7f92-85ad-4a45-89a9-c0e6b940f766-kube-api-access-nmd5d\") pod \"openstack-operator-index-rr8h7\" (UID: \"285a7f92-85ad-4a45-89a9-c0e6b940f766\") " pod="openstack-operators/openstack-operator-index-rr8h7" Mar 13 13:05:11.255604 master-0 kubenswrapper[19715]: I0313 13:05:11.255522 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nmd5d\" (UniqueName: \"kubernetes.io/projected/285a7f92-85ad-4a45-89a9-c0e6b940f766-kube-api-access-nmd5d\") pod \"openstack-operator-index-rr8h7\" (UID: \"285a7f92-85ad-4a45-89a9-c0e6b940f766\") " pod="openstack-operators/openstack-operator-index-rr8h7" Mar 13 13:05:11.404111 master-0 
kubenswrapper[19715]: I0313 13:05:11.387071 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-rr8h7" Mar 13 13:05:12.253886 master-0 kubenswrapper[19715]: I0313 13:05:12.253510 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-rr8h7"] Mar 13 13:05:12.263875 master-0 kubenswrapper[19715]: W0313 13:05:12.263796 19715 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod285a7f92_85ad_4a45_89a9_c0e6b940f766.slice/crio-fa73bd87c052504485861b3aaf014778529d5fc7bdcc83a1faa8cfb480822484 WatchSource:0}: Error finding container fa73bd87c052504485861b3aaf014778529d5fc7bdcc83a1faa8cfb480822484: Status 404 returned error can't find the container with id fa73bd87c052504485861b3aaf014778529d5fc7bdcc83a1faa8cfb480822484 Mar 13 13:05:12.587024 master-0 kubenswrapper[19715]: I0313 13:05:12.586792 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rr8h7" event={"ID":"285a7f92-85ad-4a45-89a9-c0e6b940f766","Type":"ContainerStarted","Data":"fa73bd87c052504485861b3aaf014778529d5fc7bdcc83a1faa8cfb480822484"} Mar 13 13:05:14.373567 master-0 kubenswrapper[19715]: I0313 13:05:14.373480 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:14.381742 master-0 kubenswrapper[19715]: I0313 13:05:14.380153 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:14.435687 master-0 kubenswrapper[19715]: I0313 13:05:14.435586 19715 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-79d876f4d6-kqmws" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" 
containerID="cri-o://f68e53724e2966b52f406822088d5de5ec83b1cc4ea10d74c2419f8367d009e2" gracePeriod=15 Mar 13 13:05:14.611918 master-0 kubenswrapper[19715]: I0313 13:05:14.611692 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-rr8h7" event={"ID":"285a7f92-85ad-4a45-89a9-c0e6b940f766","Type":"ContainerStarted","Data":"6ddb652936cb84955262afdeee704beec9254a572f5cd10fd09eab6b4262381e"} Mar 13 13:05:14.617376 master-0 kubenswrapper[19715]: I0313 13:05:14.617322 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79d876f4d6-kqmws_705af152-5524-4500-b326-80cc4ee76bee/console/0.log" Mar 13 13:05:14.617527 master-0 kubenswrapper[19715]: I0313 13:05:14.617386 19715 generic.go:334] "Generic (PLEG): container finished" podID="705af152-5524-4500-b326-80cc4ee76bee" containerID="f68e53724e2966b52f406822088d5de5ec83b1cc4ea10d74c2419f8367d009e2" exitCode=2 Mar 13 13:05:14.617570 master-0 kubenswrapper[19715]: I0313 13:05:14.617538 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79d876f4d6-kqmws" event={"ID":"705af152-5524-4500-b326-80cc4ee76bee","Type":"ContainerDied","Data":"f68e53724e2966b52f406822088d5de5ec83b1cc4ea10d74c2419f8367d009e2"} Mar 13 13:05:14.617983 master-0 kubenswrapper[19715]: I0313 13:05:14.617936 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:14.619241 master-0 kubenswrapper[19715]: I0313 13:05:14.619097 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-w7dw4" Mar 13 13:05:14.635424 master-0 kubenswrapper[19715]: I0313 13:05:14.635314 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-rr8h7" podStartSLOduration=3.440987482 podStartE2EDuration="4.635269676s" podCreationTimestamp="2026-03-13 13:05:10 +0000 UTC" 
firstStartedPulling="2026-03-13 13:05:12.269569313 +0000 UTC m=+938.836242110" lastFinishedPulling="2026-03-13 13:05:13.463851547 +0000 UTC m=+940.030524304" observedRunningTime="2026-03-13 13:05:14.633360036 +0000 UTC m=+941.200032793" watchObservedRunningTime="2026-03-13 13:05:14.635269676 +0000 UTC m=+941.201942433" Mar 13 13:05:14.965453 master-0 kubenswrapper[19715]: I0313 13:05:14.965369 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79d876f4d6-kqmws_705af152-5524-4500-b326-80cc4ee76bee/console/0.log" Mar 13 13:05:14.965792 master-0 kubenswrapper[19715]: I0313 13:05:14.965497 19715 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 13:05:15.111019 master-0 kubenswrapper[19715]: I0313 13:05:15.110101 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-oauth-config\") pod \"705af152-5524-4500-b326-80cc4ee76bee\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " Mar 13 13:05:15.111460 master-0 kubenswrapper[19715]: I0313 13:05:15.111114 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-oauth-serving-cert\") pod \"705af152-5524-4500-b326-80cc4ee76bee\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " Mar 13 13:05:15.111460 master-0 kubenswrapper[19715]: I0313 13:05:15.111202 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-serving-cert\") pod \"705af152-5524-4500-b326-80cc4ee76bee\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " Mar 13 13:05:15.111460 master-0 kubenswrapper[19715]: I0313 13:05:15.111428 19715 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-console-config\") pod \"705af152-5524-4500-b326-80cc4ee76bee\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " Mar 13 13:05:15.111666 master-0 kubenswrapper[19715]: I0313 13:05:15.111551 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-service-ca\") pod \"705af152-5524-4500-b326-80cc4ee76bee\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " Mar 13 13:05:15.111721 master-0 kubenswrapper[19715]: I0313 13:05:15.111687 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pswgb\" (UniqueName: \"kubernetes.io/projected/705af152-5524-4500-b326-80cc4ee76bee-kube-api-access-pswgb\") pod \"705af152-5524-4500-b326-80cc4ee76bee\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " Mar 13 13:05:15.112029 master-0 kubenswrapper[19715]: I0313 13:05:15.111993 19715 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-trusted-ca-bundle\") pod \"705af152-5524-4500-b326-80cc4ee76bee\" (UID: \"705af152-5524-4500-b326-80cc4ee76bee\") " Mar 13 13:05:15.112755 master-0 kubenswrapper[19715]: I0313 13:05:15.111948 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "705af152-5524-4500-b326-80cc4ee76bee" (UID: "705af152-5524-4500-b326-80cc4ee76bee"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:05:15.112816 master-0 kubenswrapper[19715]: I0313 13:05:15.112415 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-console-config" (OuterVolumeSpecName: "console-config") pod "705af152-5524-4500-b326-80cc4ee76bee" (UID: "705af152-5524-4500-b326-80cc4ee76bee"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:05:15.113314 master-0 kubenswrapper[19715]: I0313 13:05:15.113277 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-service-ca" (OuterVolumeSpecName: "service-ca") pod "705af152-5524-4500-b326-80cc4ee76bee" (UID: "705af152-5524-4500-b326-80cc4ee76bee"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:05:15.113875 master-0 kubenswrapper[19715]: I0313 13:05:15.113818 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "705af152-5524-4500-b326-80cc4ee76bee" (UID: "705af152-5524-4500-b326-80cc4ee76bee"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 13:05:15.114084 master-0 kubenswrapper[19715]: I0313 13:05:15.114047 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "705af152-5524-4500-b326-80cc4ee76bee" (UID: "705af152-5524-4500-b326-80cc4ee76bee"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:05:15.116875 master-0 kubenswrapper[19715]: I0313 13:05:15.116833 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "705af152-5524-4500-b326-80cc4ee76bee" (UID: "705af152-5524-4500-b326-80cc4ee76bee"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 13:05:15.117118 master-0 kubenswrapper[19715]: I0313 13:05:15.117087 19715 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/705af152-5524-4500-b326-80cc4ee76bee-kube-api-access-pswgb" (OuterVolumeSpecName: "kube-api-access-pswgb") pod "705af152-5524-4500-b326-80cc4ee76bee" (UID: "705af152-5524-4500-b326-80cc4ee76bee"). InnerVolumeSpecName "kube-api-access-pswgb". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 13:05:15.214281 master-0 kubenswrapper[19715]: I0313 13:05:15.213860 19715 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 13:05:15.214281 master-0 kubenswrapper[19715]: I0313 13:05:15.213920 19715 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:05:15.214281 master-0 kubenswrapper[19715]: I0313 13:05:15.213937 19715 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 13:05:15.214281 master-0 kubenswrapper[19715]: I0313 13:05:15.213954 19715 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-pswgb\" (UniqueName: \"kubernetes.io/projected/705af152-5524-4500-b326-80cc4ee76bee-kube-api-access-pswgb\") on node \"master-0\" DevicePath \"\"" Mar 13 13:05:15.214281 master-0 kubenswrapper[19715]: I0313 13:05:15.213967 19715 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 13:05:15.214281 master-0 kubenswrapper[19715]: I0313 13:05:15.213980 19715 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/705af152-5524-4500-b326-80cc4ee76bee-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 13:05:15.214281 master-0 kubenswrapper[19715]: I0313 13:05:15.213997 19715 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/705af152-5524-4500-b326-80cc4ee76bee-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 13:05:15.633843 master-0 kubenswrapper[19715]: I0313 13:05:15.633773 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-79d876f4d6-kqmws_705af152-5524-4500-b326-80cc4ee76bee/console/0.log" Mar 13 13:05:15.635253 master-0 kubenswrapper[19715]: I0313 13:05:15.635199 19715 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-79d876f4d6-kqmws" Mar 13 13:05:15.637913 master-0 kubenswrapper[19715]: I0313 13:05:15.637833 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-79d876f4d6-kqmws" event={"ID":"705af152-5524-4500-b326-80cc4ee76bee","Type":"ContainerDied","Data":"bf938663ffed3689f7098ea0a408a753e54981e2ca85c1c01cd91ec7fb9d341f"} Mar 13 13:05:15.638053 master-0 kubenswrapper[19715]: I0313 13:05:15.637939 19715 scope.go:117] "RemoveContainer" containerID="f68e53724e2966b52f406822088d5de5ec83b1cc4ea10d74c2419f8367d009e2" Mar 13 13:05:15.730459 master-0 kubenswrapper[19715]: I0313 13:05:15.730387 19715 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-79d876f4d6-kqmws"] Mar 13 13:05:15.741830 master-0 kubenswrapper[19715]: I0313 13:05:15.740763 19715 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-79d876f4d6-kqmws"] Mar 13 13:05:17.711857 master-0 kubenswrapper[19715]: I0313 13:05:17.711762 19715 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="705af152-5524-4500-b326-80cc4ee76bee" path="/var/lib/kubelet/pods/705af152-5524-4500-b326-80cc4ee76bee/volumes" Mar 13 13:05:21.389439 master-0 kubenswrapper[19715]: I0313 13:05:21.389342 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-rr8h7" Mar 13 13:05:21.389439 master-0 kubenswrapper[19715]: I0313 13:05:21.389444 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-rr8h7" Mar 13 13:05:21.436139 master-0 kubenswrapper[19715]: I0313 13:05:21.436049 19715 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-rr8h7" Mar 13 13:05:21.770357 master-0 kubenswrapper[19715]: I0313 13:05:21.770185 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/openstack-operator-index-rr8h7" Mar 13 13:07:12.750044 master-0 kubenswrapper[19715]: I0313 13:07:12.749916 19715 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/redhat-marketplace-92rsn" podUID="730e1f43-39b7-41de-ac81-270966725477" containerName="registry-server" probeResult="failure" output=< Mar 13 13:07:12.750044 master-0 kubenswrapper[19715]: timeout: failed to connect service ":50051" within 1s Mar 13 13:07:12.750044 master-0 kubenswrapper[19715]: > Mar 13 13:07:12.759267 master-0 kubenswrapper[19715]: I0313 13:07:12.759206 19715 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/redhat-marketplace-92rsn" podUID="730e1f43-39b7-41de-ac81-270966725477" containerName="registry-server" probeResult="failure" output=< Mar 13 13:07:12.759267 master-0 kubenswrapper[19715]: timeout: failed to connect service ":50051" within 1s Mar 13 13:07:12.759267 master-0 kubenswrapper[19715]: > Mar 13 13:10:22.258791 master-0 kubenswrapper[19715]: I0313 13:10:22.258608 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cmnhq/must-gather-wdd5m"] Mar 13 13:10:22.259966 master-0 kubenswrapper[19715]: E0313 13:10:22.259123 19715 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" Mar 13 13:10:22.259966 master-0 kubenswrapper[19715]: I0313 13:10:22.259147 19715 state_mem.go:107] "Deleted CPUSet assignment" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" Mar 13 13:10:22.259966 master-0 kubenswrapper[19715]: I0313 13:10:22.259316 19715 memory_manager.go:354] "RemoveStaleState removing state" podUID="705af152-5524-4500-b326-80cc4ee76bee" containerName="console" Mar 13 13:10:22.260671 master-0 kubenswrapper[19715]: I0313 13:10:22.260625 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cmnhq/must-gather-wdd5m" Mar 13 13:10:22.263754 master-0 kubenswrapper[19715]: I0313 13:10:22.263666 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cmnhq"/"kube-root-ca.crt" Mar 13 13:10:22.264201 master-0 kubenswrapper[19715]: I0313 13:10:22.264161 19715 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-cmnhq"/"openshift-service-ca.crt" Mar 13 13:10:22.271669 master-0 kubenswrapper[19715]: I0313 13:10:22.271603 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cmnhq/must-gather-dqn87"] Mar 13 13:10:22.274641 master-0 kubenswrapper[19715]: I0313 13:10:22.274566 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cmnhq/must-gather-dqn87" Mar 13 13:10:22.293480 master-0 kubenswrapper[19715]: I0313 13:10:22.293399 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cmnhq/must-gather-wdd5m"] Mar 13 13:10:22.318659 master-0 kubenswrapper[19715]: I0313 13:10:22.318555 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cmnhq/must-gather-dqn87"] Mar 13 13:10:22.367300 master-0 kubenswrapper[19715]: I0313 13:10:22.367216 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1e8a2e03-0fcb-4d09-a697-0443e7559bbf-must-gather-output\") pod \"must-gather-dqn87\" (UID: \"1e8a2e03-0fcb-4d09-a697-0443e7559bbf\") " pod="openshift-must-gather-cmnhq/must-gather-dqn87" Mar 13 13:10:22.367667 master-0 kubenswrapper[19715]: I0313 13:10:22.367327 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwtft\" (UniqueName: \"kubernetes.io/projected/1e8a2e03-0fcb-4d09-a697-0443e7559bbf-kube-api-access-vwtft\") pod \"must-gather-dqn87\" (UID: 
\"1e8a2e03-0fcb-4d09-a697-0443e7559bbf\") " pod="openshift-must-gather-cmnhq/must-gather-dqn87" Mar 13 13:10:22.367667 master-0 kubenswrapper[19715]: I0313 13:10:22.367390 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92cbq\" (UniqueName: \"kubernetes.io/projected/d71c18eb-ea67-40a0-b967-fcc8c406fe9d-kube-api-access-92cbq\") pod \"must-gather-wdd5m\" (UID: \"d71c18eb-ea67-40a0-b967-fcc8c406fe9d\") " pod="openshift-must-gather-cmnhq/must-gather-wdd5m" Mar 13 13:10:22.367667 master-0 kubenswrapper[19715]: I0313 13:10:22.367498 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d71c18eb-ea67-40a0-b967-fcc8c406fe9d-must-gather-output\") pod \"must-gather-wdd5m\" (UID: \"d71c18eb-ea67-40a0-b967-fcc8c406fe9d\") " pod="openshift-must-gather-cmnhq/must-gather-wdd5m" Mar 13 13:10:22.468990 master-0 kubenswrapper[19715]: I0313 13:10:22.468935 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92cbq\" (UniqueName: \"kubernetes.io/projected/d71c18eb-ea67-40a0-b967-fcc8c406fe9d-kube-api-access-92cbq\") pod \"must-gather-wdd5m\" (UID: \"d71c18eb-ea67-40a0-b967-fcc8c406fe9d\") " pod="openshift-must-gather-cmnhq/must-gather-wdd5m" Mar 13 13:10:22.469398 master-0 kubenswrapper[19715]: I0313 13:10:22.469375 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d71c18eb-ea67-40a0-b967-fcc8c406fe9d-must-gather-output\") pod \"must-gather-wdd5m\" (UID: \"d71c18eb-ea67-40a0-b967-fcc8c406fe9d\") " pod="openshift-must-gather-cmnhq/must-gather-wdd5m" Mar 13 13:10:22.469672 master-0 kubenswrapper[19715]: I0313 13:10:22.469647 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/1e8a2e03-0fcb-4d09-a697-0443e7559bbf-must-gather-output\") pod \"must-gather-dqn87\" (UID: \"1e8a2e03-0fcb-4d09-a697-0443e7559bbf\") " pod="openshift-must-gather-cmnhq/must-gather-dqn87" Mar 13 13:10:22.469920 master-0 kubenswrapper[19715]: I0313 13:10:22.469896 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwtft\" (UniqueName: \"kubernetes.io/projected/1e8a2e03-0fcb-4d09-a697-0443e7559bbf-kube-api-access-vwtft\") pod \"must-gather-dqn87\" (UID: \"1e8a2e03-0fcb-4d09-a697-0443e7559bbf\") " pod="openshift-must-gather-cmnhq/must-gather-dqn87" Mar 13 13:10:22.470667 master-0 kubenswrapper[19715]: I0313 13:10:22.470335 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/1e8a2e03-0fcb-4d09-a697-0443e7559bbf-must-gather-output\") pod \"must-gather-dqn87\" (UID: \"1e8a2e03-0fcb-4d09-a697-0443e7559bbf\") " pod="openshift-must-gather-cmnhq/must-gather-dqn87" Mar 13 13:10:22.470667 master-0 kubenswrapper[19715]: I0313 13:10:22.470435 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/d71c18eb-ea67-40a0-b967-fcc8c406fe9d-must-gather-output\") pod \"must-gather-wdd5m\" (UID: \"d71c18eb-ea67-40a0-b967-fcc8c406fe9d\") " pod="openshift-must-gather-cmnhq/must-gather-wdd5m" Mar 13 13:10:22.488829 master-0 kubenswrapper[19715]: I0313 13:10:22.488766 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwtft\" (UniqueName: \"kubernetes.io/projected/1e8a2e03-0fcb-4d09-a697-0443e7559bbf-kube-api-access-vwtft\") pod \"must-gather-dqn87\" (UID: \"1e8a2e03-0fcb-4d09-a697-0443e7559bbf\") " pod="openshift-must-gather-cmnhq/must-gather-dqn87" Mar 13 13:10:22.489900 master-0 kubenswrapper[19715]: I0313 13:10:22.489847 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-92cbq\" (UniqueName: \"kubernetes.io/projected/d71c18eb-ea67-40a0-b967-fcc8c406fe9d-kube-api-access-92cbq\") pod \"must-gather-wdd5m\" (UID: \"d71c18eb-ea67-40a0-b967-fcc8c406fe9d\") " pod="openshift-must-gather-cmnhq/must-gather-wdd5m" Mar 13 13:10:22.584185 master-0 kubenswrapper[19715]: I0313 13:10:22.583861 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cmnhq/must-gather-wdd5m" Mar 13 13:10:22.597822 master-0 kubenswrapper[19715]: I0313 13:10:22.597729 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cmnhq/must-gather-dqn87" Mar 13 13:10:23.356190 master-0 kubenswrapper[19715]: I0313 13:10:23.355979 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cmnhq/must-gather-dqn87"] Mar 13 13:10:23.363251 master-0 kubenswrapper[19715]: I0313 13:10:23.363176 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cmnhq/must-gather-wdd5m"] Mar 13 13:10:23.379358 master-0 kubenswrapper[19715]: I0313 13:10:23.379305 19715 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 13:10:23.664553 master-0 kubenswrapper[19715]: I0313 13:10:23.664444 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cmnhq/must-gather-dqn87" event={"ID":"1e8a2e03-0fcb-4d09-a697-0443e7559bbf","Type":"ContainerStarted","Data":"f0f47863378b6af9bc3f3682299272347f83cfe445d559e2b11938fc94fb92a3"} Mar 13 13:10:23.665881 master-0 kubenswrapper[19715]: I0313 13:10:23.665807 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cmnhq/must-gather-wdd5m" event={"ID":"d71c18eb-ea67-40a0-b967-fcc8c406fe9d","Type":"ContainerStarted","Data":"9712cc286cca44d45463ad91ccd2e0dc827f066017cf5ff3653388b54eeb24ca"} Mar 13 13:10:25.716569 master-0 kubenswrapper[19715]: I0313 13:10:25.716353 19715 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-must-gather-cmnhq/must-gather-dqn87" event={"ID":"1e8a2e03-0fcb-4d09-a697-0443e7559bbf","Type":"ContainerStarted","Data":"5a2f39b949a66354def9e7ad281a7ec6e8d71a1b2727280dcf2dbedd5b292300"} Mar 13 13:10:25.716569 master-0 kubenswrapper[19715]: I0313 13:10:25.716418 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cmnhq/must-gather-dqn87" event={"ID":"1e8a2e03-0fcb-4d09-a697-0443e7559bbf","Type":"ContainerStarted","Data":"6b305a9407799bc55b184bfd3b0bd32b5d0d646111715afbea5e67564a8a178e"} Mar 13 13:10:25.746987 master-0 kubenswrapper[19715]: I0313 13:10:25.746891 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cmnhq/must-gather-dqn87" podStartSLOduration=2.6090412990000003 podStartE2EDuration="3.746866792s" podCreationTimestamp="2026-03-13 13:10:22 +0000 UTC" firstStartedPulling="2026-03-13 13:10:23.382809628 +0000 UTC m=+1249.949482375" lastFinishedPulling="2026-03-13 13:10:24.520635111 +0000 UTC m=+1251.087307868" observedRunningTime="2026-03-13 13:10:25.742811503 +0000 UTC m=+1252.309484270" watchObservedRunningTime="2026-03-13 13:10:25.746866792 +0000 UTC m=+1252.313539559" Mar 13 13:10:28.075065 master-0 kubenswrapper[19715]: I0313 13:10:28.074973 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-rkg56_dc1c9136-80e1-4736-8959-cc1e58aee26e/cluster-version-operator/0.log" Mar 13 13:10:31.819714 master-0 kubenswrapper[19715]: I0313 13:10:31.815783 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-s8ztg_a704337a-47e8-4f3e-a4c1-a3e147a67125/nmstate-console-plugin/0.log" Mar 13 13:10:31.828753 master-0 kubenswrapper[19715]: I0313 13:10:31.825900 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-gxqx5_5b3d5495-d012-46ed-9ccc-96ce46655060/controller/0.log" 
Mar 13 13:10:31.845425 master-0 kubenswrapper[19715]: I0313 13:10:31.845373 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-gxqx5_5b3d5495-d012-46ed-9ccc-96ce46655060/kube-rbac-proxy/0.log" Mar 13 13:10:31.864998 master-0 kubenswrapper[19715]: I0313 13:10:31.864924 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qmgtq_2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad/nmstate-handler/0.log" Mar 13 13:10:31.894044 master-0 kubenswrapper[19715]: I0313 13:10:31.893147 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-cd9cr_5a03c104-eb50-4e42-b7df-16466c74cde4/nmstate-metrics/0.log" Mar 13 13:10:31.910044 master-0 kubenswrapper[19715]: I0313 13:10:31.909869 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-cd9cr_5a03c104-eb50-4e42-b7df-16466c74cde4/kube-rbac-proxy/0.log" Mar 13 13:10:31.911172 master-0 kubenswrapper[19715]: I0313 13:10:31.911068 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/controller/0.log" Mar 13 13:10:31.943449 master-0 kubenswrapper[19715]: I0313 13:10:31.943252 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-bsppq_42ae4c26-cb33-47a7-b53b-b88f395f06e0/nmstate-operator/0.log" Mar 13 13:10:31.966356 master-0 kubenswrapper[19715]: I0313 13:10:31.966252 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/frr/0.log" Mar 13 13:10:31.983333 master-0 kubenswrapper[19715]: I0313 13:10:31.980550 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-2k9jb_189f87d2-721f-43b8-902f-a01a5187de82/nmstate-webhook/0.log" Mar 13 13:10:31.989286 master-0 kubenswrapper[19715]: I0313 
13:10:31.989225 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/reloader/0.log" Mar 13 13:10:32.015807 master-0 kubenswrapper[19715]: I0313 13:10:32.013357 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/frr-metrics/0.log" Mar 13 13:10:32.033731 master-0 kubenswrapper[19715]: I0313 13:10:32.032960 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/kube-rbac-proxy/0.log" Mar 13 13:10:32.052734 master-0 kubenswrapper[19715]: I0313 13:10:32.052065 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/kube-rbac-proxy-frr/0.log" Mar 13 13:10:32.069363 master-0 kubenswrapper[19715]: I0313 13:10:32.069278 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/cp-frr-files/0.log" Mar 13 13:10:32.093903 master-0 kubenswrapper[19715]: I0313 13:10:32.092419 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/cp-reloader/0.log" Mar 13 13:10:32.113180 master-0 kubenswrapper[19715]: I0313 13:10:32.111968 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/cp-metrics/0.log" Mar 13 13:10:32.149609 master-0 kubenswrapper[19715]: I0313 13:10:32.141917 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-9qbxb_4779057d-1e1c-434d-b197-5401a1bec1e8/frr-k8s-webhook-server/0.log" Mar 13 13:10:32.197733 master-0 kubenswrapper[19715]: I0313 13:10:32.189530 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c7688d46-wm7m9_7ce08f6e-9720-4e70-bba0-f8a56161dc15/manager/0.log" Mar 13 13:10:32.207074 master-0 kubenswrapper[19715]: I0313 13:10:32.206615 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7568db4689-9tdfv_4f6e1dd7-43c5-4906-b16f-627418cfe501/webhook-server/0.log" Mar 13 13:10:32.315012 master-0 kubenswrapper[19715]: I0313 13:10:32.314937 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zlvcv_ae80375f-bbf8-4030-9cd6-f628f080116f/speaker/0.log" Mar 13 13:10:32.339718 master-0 kubenswrapper[19715]: I0313 13:10:32.337052 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zlvcv_ae80375f-bbf8-4030-9cd6-f628f080116f/kube-rbac-proxy/0.log" Mar 13 13:10:34.211443 master-0 kubenswrapper[19715]: I0313 13:10:34.209866 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcdctl/0.log" Mar 13 13:10:34.295653 master-0 kubenswrapper[19715]: I0313 13:10:34.295553 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd/0.log" Mar 13 13:10:34.322652 master-0 kubenswrapper[19715]: I0313 13:10:34.320694 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-metrics/0.log" Mar 13 13:10:34.356117 master-0 kubenswrapper[19715]: I0313 13:10:34.356037 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-readyz/0.log" Mar 13 13:10:34.388809 master-0 kubenswrapper[19715]: I0313 13:10:34.387659 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-rev/0.log" Mar 13 13:10:34.422656 master-0 
kubenswrapper[19715]: I0313 13:10:34.422435 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/setup/0.log" Mar 13 13:10:34.460617 master-0 kubenswrapper[19715]: I0313 13:10:34.454180 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-ensure-env-vars/0.log" Mar 13 13:10:34.482784 master-0 kubenswrapper[19715]: I0313 13:10:34.481864 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-resources-copy/0.log" Mar 13 13:10:34.532552 master-0 kubenswrapper[19715]: I0313 13:10:34.532477 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_7028b88a-ef6e-47f7-bbd7-cf798efdded5/installer/0.log" Mar 13 13:10:34.594673 master-0 kubenswrapper[19715]: I0313 13:10:34.592873 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_f2ae954b-a362-4cd1-8e54-c4aedcf30a00/installer/0.log" Mar 13 13:10:35.638224 master-0 kubenswrapper[19715]: I0313 13:10:35.638150 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-7vm6x_2352a350-0a7c-4fcd-ba8f-ee9a4c80b132/assisted-installer-controller/0.log" Mar 13 13:10:35.778436 master-0 kubenswrapper[19715]: I0313 13:10:35.778363 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-5468d7b87-r5hj5_ea4d792f-b0ff-4316-aeed-2dee2c6f1eea/oauth-openshift/0.log" Mar 13 13:10:37.292596 master-0 kubenswrapper[19715]: I0313 13:10:37.292180 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/4.log" Mar 13 13:10:37.412173 master-0 kubenswrapper[19715]: I0313 13:10:37.412101 
19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-ztmrr_f2a74c2a-8376-4998-bdc6-02a978f1f568/authentication-operator/5.log" Mar 13 13:10:38.230772 master-0 kubenswrapper[19715]: I0313 13:10:38.230689 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-q5h8k_38ba3e49-717e-458d-bb3d-4acbd6d904bf/router/0.log" Mar 13 13:10:38.720700 master-0 kubenswrapper[19715]: I0313 13:10:38.720571 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cmnhq/must-gather-wdd5m" event={"ID":"d71c18eb-ea67-40a0-b967-fcc8c406fe9d","Type":"ContainerStarted","Data":"661857bccc53c5160df0b226f3f494d3cc391fbc10747bb883dda6ede4fb9575"} Mar 13 13:10:38.720700 master-0 kubenswrapper[19715]: I0313 13:10:38.720699 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cmnhq/must-gather-wdd5m" event={"ID":"d71c18eb-ea67-40a0-b967-fcc8c406fe9d","Type":"ContainerStarted","Data":"44eff572ae175a37f401c1d2cc597dc556e1cdda8f2fb5af9abaafa2e99ce691"} Mar 13 13:10:38.748381 master-0 kubenswrapper[19715]: I0313 13:10:38.748101 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cmnhq/must-gather-wdd5m" podStartSLOduration=2.146558501 podStartE2EDuration="16.748059251s" podCreationTimestamp="2026-03-13 13:10:22 +0000 UTC" firstStartedPulling="2026-03-13 13:10:23.379258475 +0000 UTC m=+1249.945931232" lastFinishedPulling="2026-03-13 13:10:37.980759225 +0000 UTC m=+1264.547431982" observedRunningTime="2026-03-13 13:10:38.74520719 +0000 UTC m=+1265.311879967" watchObservedRunningTime="2026-03-13 13:10:38.748059251 +0000 UTC m=+1265.314732008" Mar 13 13:10:38.960154 master-0 kubenswrapper[19715]: I0313 13:10:38.960065 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-oauth-apiserver_apiserver-6f6d949ddd-p9f8k_0943b2db-9658-4a8d-89da-00779d55db6e/oauth-apiserver/0.log" Mar 13 13:10:38.984739 master-0 kubenswrapper[19715]: I0313 13:10:38.984663 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-6f6d949ddd-p9f8k_0943b2db-9658-4a8d-89da-00779d55db6e/fix-audit-permissions/0.log" Mar 13 13:10:39.757845 master-0 kubenswrapper[19715]: I0313 13:10:39.757764 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-94zs2_6592aa5b-4a50-40f6-80a5-87e3c547018d/kube-rbac-proxy/0.log" Mar 13 13:10:39.828523 master-0 kubenswrapper[19715]: I0313 13:10:39.828437 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-94zs2_6592aa5b-4a50-40f6-80a5-87e3c547018d/cluster-autoscaler-operator/0.log" Mar 13 13:10:39.860836 master-0 kubenswrapper[19715]: I0313 13:10:39.860761 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/2.log" Mar 13 13:10:39.863992 master-0 kubenswrapper[19715]: I0313 13:10:39.863922 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/3.log" Mar 13 13:10:39.887203 master-0 kubenswrapper[19715]: I0313 13:10:39.887142 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/baremetal-kube-rbac-proxy/0.log" Mar 13 13:10:39.922013 master-0 kubenswrapper[19715]: I0313 13:10:39.920248 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-d7qrz_74fa8c05-2d64-4307-9fe3-1d3d69a5aa75/control-plane-machine-set-operator/0.log" Mar 13 13:10:39.953830 master-0 kubenswrapper[19715]: I0313 13:10:39.952629 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-zthfh_03758d96-5a20-4cba-92e0-47f5b1a3e558/kube-rbac-proxy/0.log" Mar 13 13:10:39.998750 master-0 kubenswrapper[19715]: I0313 13:10:39.997278 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-zthfh_03758d96-5a20-4cba-92e0-47f5b1a3e558/machine-api-operator/0.log" Mar 13 13:10:40.281694 master-0 kubenswrapper[19715]: I0313 13:10:40.281588 19715 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc"] Mar 13 13:10:40.283528 master-0 kubenswrapper[19715]: I0313 13:10:40.283487 19715 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.302639 master-0 kubenswrapper[19715]: I0313 13:10:40.298593 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc"] Mar 13 13:10:40.376631 master-0 kubenswrapper[19715]: I0313 13:10:40.372639 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-proc\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.376631 master-0 kubenswrapper[19715]: I0313 13:10:40.372747 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htcjr\" (UniqueName: \"kubernetes.io/projected/bd9ee57e-d5ca-4b01-bb31-23105590d298-kube-api-access-htcjr\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.376631 master-0 kubenswrapper[19715]: I0313 13:10:40.372796 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-podres\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.376631 master-0 kubenswrapper[19715]: I0313 13:10:40.372828 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-sys\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " 
pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.376631 master-0 kubenswrapper[19715]: I0313 13:10:40.372868 19715 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-lib-modules\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.474293 master-0 kubenswrapper[19715]: I0313 13:10:40.474216 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-lib-modules\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.474679 master-0 kubenswrapper[19715]: I0313 13:10:40.474322 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-proc\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.474679 master-0 kubenswrapper[19715]: I0313 13:10:40.474385 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htcjr\" (UniqueName: \"kubernetes.io/projected/bd9ee57e-d5ca-4b01-bb31-23105590d298-kube-api-access-htcjr\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.474679 master-0 kubenswrapper[19715]: I0313 13:10:40.474429 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: 
\"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-podres\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.474679 master-0 kubenswrapper[19715]: I0313 13:10:40.474459 19715 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-sys\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.474679 master-0 kubenswrapper[19715]: I0313 13:10:40.474622 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-sys\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.474941 master-0 kubenswrapper[19715]: I0313 13:10:40.474701 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-lib-modules\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.474941 master-0 kubenswrapper[19715]: I0313 13:10:40.474739 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-proc\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.475184 master-0 kubenswrapper[19715]: I0313 13:10:40.475145 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"podres\" (UniqueName: \"kubernetes.io/host-path/bd9ee57e-d5ca-4b01-bb31-23105590d298-podres\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.528399 master-0 kubenswrapper[19715]: I0313 13:10:40.528333 19715 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htcjr\" (UniqueName: \"kubernetes.io/projected/bd9ee57e-d5ca-4b01-bb31-23105590d298-kube-api-access-htcjr\") pod \"perf-node-gather-daemonset-ksrsc\" (UID: \"bd9ee57e-d5ca-4b01-bb31-23105590d298\") " pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:40.603522 master-0 kubenswrapper[19715]: I0313 13:10:40.603360 19715 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:41.313776 master-0 kubenswrapper[19715]: I0313 13:10:41.313593 19715 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc"] Mar 13 13:10:41.771611 master-0 kubenswrapper[19715]: I0313 13:10:41.771518 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" event={"ID":"bd9ee57e-d5ca-4b01-bb31-23105590d298","Type":"ContainerStarted","Data":"219cd5b91c21084947a7c0726e1f67ce33a86f26d41cc71f18a64bca8a1a5ebf"} Mar 13 13:10:41.831516 master-0 kubenswrapper[19715]: I0313 13:10:41.831455 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w_5c1c87ba-53c4-4b52-88e2-a3ed2d801393/cluster-cloud-controller-manager/0.log" Mar 13 13:10:41.863082 master-0 kubenswrapper[19715]: I0313 13:10:41.863019 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w_5c1c87ba-53c4-4b52-88e2-a3ed2d801393/config-sync-controllers/0.log" Mar 13 13:10:41.882264 master-0 kubenswrapper[19715]: I0313 13:10:41.882178 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-5bw5w_5c1c87ba-53c4-4b52-88e2-a3ed2d801393/kube-rbac-proxy/0.log" Mar 13 13:10:42.795533 master-0 kubenswrapper[19715]: I0313 13:10:42.795434 19715 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" event={"ID":"bd9ee57e-d5ca-4b01-bb31-23105590d298","Type":"ContainerStarted","Data":"d946f189cd4beb5f53ed1df287b89d1ff345f19060015ca67bbc67026a5789cc"} Mar 13 13:10:42.796707 master-0 kubenswrapper[19715]: I0313 13:10:42.795864 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:42.825913 master-0 kubenswrapper[19715]: I0313 13:10:42.824162 19715 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" podStartSLOduration=2.824134571 podStartE2EDuration="2.824134571s" podCreationTimestamp="2026-03-13 13:10:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 13:10:42.821098125 +0000 UTC m=+1269.387770882" watchObservedRunningTime="2026-03-13 13:10:42.824134571 +0000 UTC m=+1269.390807328" Mar 13 13:10:43.669177 master-0 kubenswrapper[19715]: I0313 13:10:43.669103 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-cb577_31442e1e-3f42-4dba-82d5-08e5f8d29a58/kube-rbac-proxy/0.log" Mar 13 13:10:43.714006 master-0 kubenswrapper[19715]: 
I0313 13:10:43.713945 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-cb577_31442e1e-3f42-4dba-82d5-08e5f8d29a58/cloud-credential-operator/0.log" Mar 13 13:10:44.850633 master-0 kubenswrapper[19715]: I0313 13:10:44.849717 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-gxqx5_5b3d5495-d012-46ed-9ccc-96ce46655060/controller/0.log" Mar 13 13:10:44.860536 master-0 kubenswrapper[19715]: I0313 13:10:44.860485 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-gxqx5_5b3d5495-d012-46ed-9ccc-96ce46655060/kube-rbac-proxy/0.log" Mar 13 13:10:44.888274 master-0 kubenswrapper[19715]: I0313 13:10:44.887922 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/controller/0.log" Mar 13 13:10:44.939156 master-0 kubenswrapper[19715]: I0313 13:10:44.939093 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/frr/0.log" Mar 13 13:10:44.949675 master-0 kubenswrapper[19715]: I0313 13:10:44.949629 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/reloader/0.log" Mar 13 13:10:44.966816 master-0 kubenswrapper[19715]: I0313 13:10:44.966753 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/frr-metrics/0.log" Mar 13 13:10:44.989399 master-0 kubenswrapper[19715]: I0313 13:10:44.989317 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/kube-rbac-proxy/0.log" Mar 13 13:10:45.007138 master-0 kubenswrapper[19715]: I0313 13:10:45.007070 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/kube-rbac-proxy-frr/0.log" Mar 13 13:10:45.018702 master-0 kubenswrapper[19715]: I0313 13:10:45.018657 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/cp-frr-files/0.log" Mar 13 13:10:45.035071 master-0 kubenswrapper[19715]: I0313 13:10:45.035003 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/cp-reloader/0.log" Mar 13 13:10:45.047620 master-0 kubenswrapper[19715]: I0313 13:10:45.047549 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/cp-metrics/0.log" Mar 13 13:10:45.061099 master-0 kubenswrapper[19715]: I0313 13:10:45.061027 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-9qbxb_4779057d-1e1c-434d-b197-5401a1bec1e8/frr-k8s-webhook-server/0.log" Mar 13 13:10:45.093221 master-0 kubenswrapper[19715]: I0313 13:10:45.093163 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c7688d46-wm7m9_7ce08f6e-9720-4e70-bba0-f8a56161dc15/manager/0.log" Mar 13 13:10:45.115722 master-0 kubenswrapper[19715]: I0313 13:10:45.113260 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7568db4689-9tdfv_4f6e1dd7-43c5-4906-b16f-627418cfe501/webhook-server/0.log" Mar 13 13:10:45.188162 master-0 kubenswrapper[19715]: I0313 13:10:45.188032 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zlvcv_ae80375f-bbf8-4030-9cd6-f628f080116f/speaker/0.log" Mar 13 13:10:45.196454 master-0 kubenswrapper[19715]: I0313 13:10:45.196380 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_speaker-zlvcv_ae80375f-bbf8-4030-9cd6-f628f080116f/kube-rbac-proxy/0.log" Mar 13 13:10:45.495116 master-0 kubenswrapper[19715]: I0313 13:10:45.495029 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/3.log" Mar 13 13:10:45.497855 master-0 kubenswrapper[19715]: I0313 13:10:45.497806 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-config-operator/4.log" Mar 13 13:10:45.511252 master-0 kubenswrapper[19715]: I0313 13:10:45.511200 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-tml9z_edde8919-104a-4f05-8e21-46787f706bed/openshift-api/0.log" Mar 13 13:10:46.275958 master-0 kubenswrapper[19715]: I0313 13:10:46.275898 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-qhg45_572e278b-c463-49b0-a198-49bd9e2c288c/console-operator/2.log" Mar 13 13:10:46.346323 master-0 kubenswrapper[19715]: I0313 13:10:46.346259 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-qhg45_572e278b-c463-49b0-a198-49bd9e2c288c/console-operator/3.log" Mar 13 13:10:47.198232 master-0 kubenswrapper[19715]: I0313 13:10:47.198149 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-689d5f465d-xhncs_44b38341-ee2e-4fc0-ae38-8ad3aadb33e2/console/0.log" Mar 13 13:10:47.236976 master-0 kubenswrapper[19715]: I0313 13:10:47.236921 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-84f57b9877-nz574_a64d9c42-4a0b-472a-955a-4edab6b33210/download-server/0.log" Mar 13 13:10:47.940510 master-0 kubenswrapper[19715]: 
I0313 13:10:47.940451 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-hr4ws_b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/cluster-storage-operator/1.log" Mar 13 13:10:47.958742 master-0 kubenswrapper[19715]: I0313 13:10:47.958688 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-hr4ws_b6cf4e65-37ac-4c8c-98dd-1c86ca7997f2/cluster-storage-operator/2.log" Mar 13 13:10:47.978021 master-0 kubenswrapper[19715]: I0313 13:10:47.977960 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/3.log" Mar 13 13:10:47.984338 master-0 kubenswrapper[19715]: I0313 13:10:47.984277 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-lf2dh_1464d6b1-7e9b-47a1-ab7f-8fac3ca13c53/snapshot-controller/4.log" Mar 13 13:10:48.017292 master-0 kubenswrapper[19715]: I0313 13:10:48.017202 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-77b2h_a6a45be0-19ef-4d36-b8a7-eb2705d24bfa/csi-snapshot-controller-operator/0.log" Mar 13 13:10:48.023000 master-0 kubenswrapper[19715]: I0313 13:10:48.022963 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-77b2h_a6a45be0-19ef-4d36-b8a7-eb2705d24bfa/csi-snapshot-controller-operator/1.log" Mar 13 13:10:48.621488 master-0 kubenswrapper[19715]: I0313 13:10:48.621413 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-589895fbb7-w7mv2_f85ab8ab-f9f1-47ad-9c96-9498cef92474/dns-operator/0.log" Mar 13 13:10:48.636206 master-0 kubenswrapper[19715]: I0313 
13:10:48.636158 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-589895fbb7-w7mv2_f85ab8ab-f9f1-47ad-9c96-9498cef92474/kube-rbac-proxy/0.log" Mar 13 13:10:49.139349 master-0 kubenswrapper[19715]: I0313 13:10:49.139288 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-qh2tf_8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5/dns/0.log" Mar 13 13:10:49.155072 master-0 kubenswrapper[19715]: I0313 13:10:49.155015 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-qh2tf_8f76e22f-e2b8-40fe-be15-f87b7a4ad8f5/kube-rbac-proxy/0.log" Mar 13 13:10:49.175541 master-0 kubenswrapper[19715]: I0313 13:10:49.175472 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-5jth9_f726d662-90e1-45b9-9bba-76a9c03faced/dns-node-resolver/0.log" Mar 13 13:10:49.247353 master-0 kubenswrapper[19715]: I0313 13:10:49.247298 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-rr8h7_285a7f92-85ad-4a45-89a9-c0e6b940f766/registry-server/0.log" Mar 13 13:10:49.956017 master-0 kubenswrapper[19715]: I0313 13:10:49.955954 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-v5bfn_6e55908e-59f3-45a2-82aa-2616c5a2fd52/etcd-operator/0.log" Mar 13 13:10:49.994121 master-0 kubenswrapper[19715]: I0313 13:10:49.994042 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-v5bfn_6e55908e-59f3-45a2-82aa-2616c5a2fd52/etcd-operator/1.log" Mar 13 13:10:50.642963 master-0 kubenswrapper[19715]: I0313 13:10:50.641350 19715 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-cmnhq/perf-node-gather-daemonset-ksrsc" Mar 13 13:10:51.098423 master-0 kubenswrapper[19715]: I0313 13:10:51.098230 19715 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcdctl/0.log"
Mar 13 13:10:51.181162 master-0 kubenswrapper[19715]: I0313 13:10:51.180905 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd/0.log"
Mar 13 13:10:51.196090 master-0 kubenswrapper[19715]: I0313 13:10:51.196001 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-metrics/0.log"
Mar 13 13:10:51.210432 master-0 kubenswrapper[19715]: I0313 13:10:51.210363 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-readyz/0.log"
Mar 13 13:10:51.226178 master-0 kubenswrapper[19715]: I0313 13:10:51.226117 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-rev/0.log"
Mar 13 13:10:51.240564 master-0 kubenswrapper[19715]: I0313 13:10:51.240511 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/setup/0.log"
Mar 13 13:10:51.253849 master-0 kubenswrapper[19715]: I0313 13:10:51.253799 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-ensure-env-vars/0.log"
Mar 13 13:10:51.274041 master-0 kubenswrapper[19715]: I0313 13:10:51.273980 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-resources-copy/0.log"
Mar 13 13:10:51.317120 master-0 kubenswrapper[19715]: I0313 13:10:51.317062 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_7028b88a-ef6e-47f7-bbd7-cf798efdded5/installer/0.log"
Mar 13 13:10:51.384067 master-0 kubenswrapper[19715]: I0313 13:10:51.383993 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_f2ae954b-a362-4cd1-8e54-c4aedcf30a00/installer/0.log"
Mar 13 13:10:52.149451 master-0 kubenswrapper[19715]: I0313 13:10:52.148956 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-cjq8f_16c2d774-967f-4964-ab4e-eb13c4364f63/cluster-image-registry-operator/0.log"
Mar 13 13:10:52.157874 master-0 kubenswrapper[19715]: I0313 13:10:52.157803 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-cjq8f_16c2d774-967f-4964-ab4e-eb13c4364f63/cluster-image-registry-operator/1.log"
Mar 13 13:10:52.176250 master-0 kubenswrapper[19715]: I0313 13:10:52.176210 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-kmnl4_d09c1267-3853-4ddf-8b98-2c0d8b7c845c/node-ca/0.log"
Mar 13 13:10:52.807135 master-0 kubenswrapper[19715]: I0313 13:10:52.807080 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/2.log"
Mar 13 13:10:52.813564 master-0 kubenswrapper[19715]: I0313 13:10:52.813524 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/ingress-operator/3.log"
Mar 13 13:10:52.828745 master-0 kubenswrapper[19715]: I0313 13:10:52.828472 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-9nxcz_c1213b50-28bf-43ff-94c4-20616907735b/kube-rbac-proxy/0.log"
Mar 13 13:10:53.876854 master-0 kubenswrapper[19715]: I0313 13:10:53.876795 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-ddjx7_cbd86a78-769d-4abc-b02d-48d52d9937c4/serve-healthcheck-canary/0.log"
Mar 13 13:10:54.469605 master-0 kubenswrapper[19715]: I0313 13:10:54.469511 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-8f89dfddd-s4gd8_0ecab24a-cb8c-4171-9a04-c34d1d6d71c1/insights-operator/5.log"
Mar 13 13:10:54.474429 master-0 kubenswrapper[19715]: I0313 13:10:54.474381 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-8f89dfddd-s4gd8_0ecab24a-cb8c-4171-9a04-c34d1d6d71c1/insights-operator/6.log"
Mar 13 13:10:56.083707 master-0 kubenswrapper[19715]: I0313 13:10:56.083624 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_31f6a3b3-4e57-48bd-b40e-308ba4a2cd90/alertmanager/0.log"
Mar 13 13:10:56.106704 master-0 kubenswrapper[19715]: I0313 13:10:56.106647 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_31f6a3b3-4e57-48bd-b40e-308ba4a2cd90/config-reloader/0.log"
Mar 13 13:10:56.131617 master-0 kubenswrapper[19715]: I0313 13:10:56.131538 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_31f6a3b3-4e57-48bd-b40e-308ba4a2cd90/kube-rbac-proxy-web/0.log"
Mar 13 13:10:56.153031 master-0 kubenswrapper[19715]: I0313 13:10:56.152946 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_31f6a3b3-4e57-48bd-b40e-308ba4a2cd90/kube-rbac-proxy/0.log"
Mar 13 13:10:56.188086 master-0 kubenswrapper[19715]: I0313 13:10:56.188029 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_31f6a3b3-4e57-48bd-b40e-308ba4a2cd90/kube-rbac-proxy-metric/0.log"
Mar 13 13:10:56.211705 master-0 kubenswrapper[19715]: I0313 13:10:56.211660 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_31f6a3b3-4e57-48bd-b40e-308ba4a2cd90/prom-label-proxy/0.log"
Mar 13 13:10:56.228487 master-0 kubenswrapper[19715]: I0313 13:10:56.228425 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_31f6a3b3-4e57-48bd-b40e-308ba4a2cd90/init-config-reloader/0.log"
Mar 13 13:10:56.298402 master-0 kubenswrapper[19715]: I0313 13:10:56.298328 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-4jlnk_71b741d4-3899-4d31-afd1-72f5a9321f75/cluster-monitoring-operator/0.log"
Mar 13 13:10:56.320192 master-0 kubenswrapper[19715]: I0313 13:10:56.320096 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-lz46x_f7495ca2-ee01-46f5-b210-5957f546270b/kube-state-metrics/0.log"
Mar 13 13:10:56.342299 master-0 kubenswrapper[19715]: I0313 13:10:56.342161 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-lz46x_f7495ca2-ee01-46f5-b210-5957f546270b/kube-rbac-proxy-main/0.log"
Mar 13 13:10:56.359335 master-0 kubenswrapper[19715]: I0313 13:10:56.359282 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-lz46x_f7495ca2-ee01-46f5-b210-5957f546270b/kube-rbac-proxy-self/0.log"
Mar 13 13:10:56.390316 master-0 kubenswrapper[19715]: I0313 13:10:56.390260 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-6b94c647f5-cmzc9_6cca39b9-d6c1-486d-a286-6744d0a063bc/metrics-server/0.log"
Mar 13 13:10:56.411634 master-0 kubenswrapper[19715]: I0313 13:10:56.411586 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-6f8b57985f-t4whs_e9b0a016-5a0f-49e5-a4f1-687da89b6408/monitoring-plugin/0.log"
Mar 13 13:10:56.442383 master-0 kubenswrapper[19715]: I0313 13:10:56.442315 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-djlbx_74f20dbd-f800-4aab-8263-1bc2395c8123/node-exporter/0.log"
Mar 13 13:10:56.461532 master-0 kubenswrapper[19715]: I0313 13:10:56.461471 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-djlbx_74f20dbd-f800-4aab-8263-1bc2395c8123/kube-rbac-proxy/0.log"
Mar 13 13:10:56.480944 master-0 kubenswrapper[19715]: I0313 13:10:56.480876 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-djlbx_74f20dbd-f800-4aab-8263-1bc2395c8123/init-textfile/0.log"
Mar 13 13:10:56.503786 master-0 kubenswrapper[19715]: I0313 13:10:56.503723 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-vwks2_6db3e185-395c-4d94-82a0-fb14978f626d/kube-rbac-proxy-main/0.log"
Mar 13 13:10:56.526963 master-0 kubenswrapper[19715]: I0313 13:10:56.526908 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-vwks2_6db3e185-395c-4d94-82a0-fb14978f626d/kube-rbac-proxy-self/0.log"
Mar 13 13:10:56.546537 master-0 kubenswrapper[19715]: I0313 13:10:56.546480 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-vwks2_6db3e185-395c-4d94-82a0-fb14978f626d/openshift-state-metrics/0.log"
Mar 13 13:10:56.586457 master-0 kubenswrapper[19715]: I0313 13:10:56.586384 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_65129feb-d231-4e3f-84a0-e769ea0b0eef/prometheus/0.log"
Mar 13 13:10:56.611291 master-0 kubenswrapper[19715]: I0313 13:10:56.611133 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_65129feb-d231-4e3f-84a0-e769ea0b0eef/config-reloader/0.log"
Mar 13 13:10:56.626412 master-0 kubenswrapper[19715]: I0313 13:10:56.626344 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_65129feb-d231-4e3f-84a0-e769ea0b0eef/thanos-sidecar/0.log"
Mar 13 13:10:56.640906 master-0 kubenswrapper[19715]: I0313 13:10:56.640828 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_65129feb-d231-4e3f-84a0-e769ea0b0eef/kube-rbac-proxy-web/0.log"
Mar 13 13:10:56.656283 master-0 kubenswrapper[19715]: I0313 13:10:56.656217 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_65129feb-d231-4e3f-84a0-e769ea0b0eef/kube-rbac-proxy/0.log"
Mar 13 13:10:56.674837 master-0 kubenswrapper[19715]: I0313 13:10:56.674764 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_65129feb-d231-4e3f-84a0-e769ea0b0eef/kube-rbac-proxy-thanos/0.log"
Mar 13 13:10:56.694435 master-0 kubenswrapper[19715]: I0313 13:10:56.694381 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_65129feb-d231-4e3f-84a0-e769ea0b0eef/init-config-reloader/0.log"
Mar 13 13:10:56.722741 master-0 kubenswrapper[19715]: I0313 13:10:56.722678 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5ff8674d55-wb9b4_e79537b5-fbdf-419a-9148-da0433806c88/prometheus-operator/0.log"
Mar 13 13:10:56.735658 master-0 kubenswrapper[19715]: I0313 13:10:56.735558 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5ff8674d55-wb9b4_e79537b5-fbdf-419a-9148-da0433806c88/kube-rbac-proxy/0.log"
Mar 13 13:10:56.756259 master-0 kubenswrapper[19715]: I0313 13:10:56.756185 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-8464df8497-6w4fc_d2f4900f-4ee7-4879-a97c-c6443d0d9acc/prometheus-operator-admission-webhook/0.log"
Mar 13 13:10:56.785794 master-0 kubenswrapper[19715]: I0313 13:10:56.785732 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7fb9979c45-qlpfr_badf8d0b-f96a-4919-aea5-a6510a2a2c03/telemeter-client/0.log"
Mar 13 13:10:56.808994 master-0 kubenswrapper[19715]: I0313 13:10:56.808930 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7fb9979c45-qlpfr_badf8d0b-f96a-4919-aea5-a6510a2a2c03/reload/0.log"
Mar 13 13:10:56.853525 master-0 kubenswrapper[19715]: I0313 13:10:56.853399 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7fb9979c45-qlpfr_badf8d0b-f96a-4919-aea5-a6510a2a2c03/kube-rbac-proxy/0.log"
Mar 13 13:10:56.905658 master-0 kubenswrapper[19715]: I0313 13:10:56.905567 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8fc4dc979-blhgb_d9e50ad5-6999-441a-86ef-d56e490d0d75/thanos-query/0.log"
Mar 13 13:10:56.992617 master-0 kubenswrapper[19715]: I0313 13:10:56.992538 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8fc4dc979-blhgb_d9e50ad5-6999-441a-86ef-d56e490d0d75/kube-rbac-proxy-web/0.log"
Mar 13 13:10:57.457004 master-0 kubenswrapper[19715]: I0313 13:10:57.456691 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8fc4dc979-blhgb_d9e50ad5-6999-441a-86ef-d56e490d0d75/kube-rbac-proxy/0.log"
Mar 13 13:10:57.480480 master-0 kubenswrapper[19715]: I0313 13:10:57.480364 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8fc4dc979-blhgb_d9e50ad5-6999-441a-86ef-d56e490d0d75/prom-label-proxy/0.log"
Mar 13 13:10:57.511110 master-0 kubenswrapper[19715]: I0313 13:10:57.511033 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8fc4dc979-blhgb_d9e50ad5-6999-441a-86ef-d56e490d0d75/kube-rbac-proxy-rules/0.log"
Mar 13 13:10:57.530420 master-0 kubenswrapper[19715]: I0313 13:10:57.530344 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-8fc4dc979-blhgb_d9e50ad5-6999-441a-86ef-d56e490d0d75/kube-rbac-proxy-metrics/0.log"
Mar 13 13:10:57.688096 master-0 kubenswrapper[19715]: I0313 13:10:57.687424 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-94zs2_6592aa5b-4a50-40f6-80a5-87e3c547018d/kube-rbac-proxy/0.log"
Mar 13 13:10:57.729559 master-0 kubenswrapper[19715]: I0313 13:10:57.727407 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-94zs2_6592aa5b-4a50-40f6-80a5-87e3c547018d/cluster-autoscaler-operator/0.log"
Mar 13 13:10:57.742887 master-0 kubenswrapper[19715]: I0313 13:10:57.742765 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/2.log"
Mar 13 13:10:57.744066 master-0 kubenswrapper[19715]: I0313 13:10:57.744022 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/cluster-baremetal-operator/3.log"
Mar 13 13:10:57.755690 master-0 kubenswrapper[19715]: I0313 13:10:57.755309 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-hp84r_e0763043-3813-43b6-9618-b2d15c942edb/baremetal-kube-rbac-proxy/0.log"
Mar 13 13:10:57.777257 master-0 kubenswrapper[19715]: I0313 13:10:57.777191 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-d7qrz_74fa8c05-2d64-4307-9fe3-1d3d69a5aa75/control-plane-machine-set-operator/0.log"
Mar 13 13:10:57.798486 master-0 kubenswrapper[19715]: I0313 13:10:57.798428 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-zthfh_03758d96-5a20-4cba-92e0-47f5b1a3e558/kube-rbac-proxy/0.log"
Mar 13 13:10:57.812705 master-0 kubenswrapper[19715]: I0313 13:10:57.812651 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-zthfh_03758d96-5a20-4cba-92e0-47f5b1a3e558/machine-api-operator/0.log"
Mar 13 13:10:59.271758 master-0 kubenswrapper[19715]: I0313 13:10:59.271662 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-gxqx5_5b3d5495-d012-46ed-9ccc-96ce46655060/controller/0.log"
Mar 13 13:10:59.289957 master-0 kubenswrapper[19715]: I0313 13:10:59.289894 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-gxqx5_5b3d5495-d012-46ed-9ccc-96ce46655060/kube-rbac-proxy/0.log"
Mar 13 13:10:59.321874 master-0 kubenswrapper[19715]: I0313 13:10:59.321804 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/controller/0.log"
Mar 13 13:10:59.386097 master-0 kubenswrapper[19715]: I0313 13:10:59.386019 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/frr/0.log"
Mar 13 13:10:59.403927 master-0 kubenswrapper[19715]: I0313 13:10:59.403881 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/reloader/0.log"
Mar 13 13:10:59.418887 master-0 kubenswrapper[19715]: I0313 13:10:59.418754 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/frr-metrics/0.log"
Mar 13 13:10:59.437135 master-0 kubenswrapper[19715]: I0313 13:10:59.437063 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/kube-rbac-proxy/0.log"
Mar 13 13:10:59.454326 master-0 kubenswrapper[19715]: I0313 13:10:59.454256 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/kube-rbac-proxy-frr/0.log"
Mar 13 13:10:59.475600 master-0 kubenswrapper[19715]: I0313 13:10:59.475503 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/cp-frr-files/0.log"
Mar 13 13:10:59.491485 master-0 kubenswrapper[19715]: I0313 13:10:59.491403 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/cp-reloader/0.log"
Mar 13 13:10:59.508179 master-0 kubenswrapper[19715]: I0313 13:10:59.508085 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5c4fm_443d9a8a-7c66-4a0e-8d34-5307f6f1ef13/cp-metrics/0.log"
Mar 13 13:10:59.534958 master-0 kubenswrapper[19715]: I0313 13:10:59.534808 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-9qbxb_4779057d-1e1c-434d-b197-5401a1bec1e8/frr-k8s-webhook-server/0.log"
Mar 13 13:10:59.576561 master-0 kubenswrapper[19715]: I0313 13:10:59.576497 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6c7688d46-wm7m9_7ce08f6e-9720-4e70-bba0-f8a56161dc15/manager/0.log"
Mar 13 13:10:59.596633 master-0 kubenswrapper[19715]: I0313 13:10:59.596558 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-7568db4689-9tdfv_4f6e1dd7-43c5-4906-b16f-627418cfe501/webhook-server/0.log"
Mar 13 13:10:59.690882 master-0 kubenswrapper[19715]: I0313 13:10:59.690724 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zlvcv_ae80375f-bbf8-4030-9cd6-f628f080116f/speaker/0.log"
Mar 13 13:10:59.707370 master-0 kubenswrapper[19715]: I0313 13:10:59.707311 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-zlvcv_ae80375f-bbf8-4030-9cd6-f628f080116f/kube-rbac-proxy/0.log"
Mar 13 13:11:00.987828 master-0 kubenswrapper[19715]: I0313 13:11:00.987669 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-mwnxf_5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/cluster-node-tuning-operator/1.log"
Mar 13 13:11:00.987828 master-0 kubenswrapper[19715]: I0313 13:11:00.987771 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-mwnxf_5834a7c4-4e76-4fc7-a3ba-3ff99ee8f346/cluster-node-tuning-operator/0.log"
Mar 13 13:11:01.018292 master-0 kubenswrapper[19715]: I0313 13:11:01.018247 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-d7h2t_58581675-62f2-4564-9e12-bf34551b96ac/tuned/0.log"
Mar 13 13:11:02.906674 master-0 kubenswrapper[19715]: I0313 13:11:02.906615 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-hsrbc_684c9067-189a-4f50-ac8d-97111aa73d9c/kube-apiserver-operator/2.log"
Mar 13 13:11:02.966865 master-0 kubenswrapper[19715]: I0313 13:11:02.966723 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-hsrbc_684c9067-189a-4f50-ac8d-97111aa73d9c/kube-apiserver-operator/3.log"
Mar 13 13:11:03.644609 master-0 kubenswrapper[19715]: I0313 13:11:03.644521 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_0feecf04-574d-4bf6-968d-77dd5c35260b/installer/0.log"
Mar 13 13:11:03.671828 master-0 kubenswrapper[19715]: I0313 13:11:03.671765 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-retry-1-master-0_bc244427-5e4e-441c-a04d-f93aeca9b7c1/installer/0.log"
Mar 13 13:11:03.709643 master-0 kubenswrapper[19715]: I0313 13:11:03.709553 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_cc6e9ceb-c6bf-409f-b515-b441a94db482/installer/0.log"
Mar 13 13:11:03.735223 master-0 kubenswrapper[19715]: I0313 13:11:03.735156 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_139213ac-1249-40eb-853f-768a8c20f6cd/installer/0.log"
Mar 13 13:11:03.773634 master-0 kubenswrapper[19715]: I0313 13:11:03.773531 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-5-master-0_7c65fa87-b404-4e2d-b730-d8e3ae5a0990/installer/0.log"
Mar 13 13:11:03.929177 master-0 kubenswrapper[19715]: I0313 13:11:03.929038 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_36d4251d3504cdc0ec85144c1379056c/kube-apiserver/0.log"
Mar 13 13:11:03.951532 master-0 kubenswrapper[19715]: I0313 13:11:03.949806 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_36d4251d3504cdc0ec85144c1379056c/kube-apiserver-cert-syncer/0.log"
Mar 13 13:11:03.974431 master-0 kubenswrapper[19715]: I0313 13:11:03.974050 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_36d4251d3504cdc0ec85144c1379056c/kube-apiserver-cert-regeneration-controller/0.log"
Mar 13 13:11:03.990921 master-0 kubenswrapper[19715]: I0313 13:11:03.990831 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_36d4251d3504cdc0ec85144c1379056c/kube-apiserver-insecure-readyz/0.log"
Mar 13 13:11:04.012625 master-0 kubenswrapper[19715]: I0313 13:11:04.012556 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_36d4251d3504cdc0ec85144c1379056c/kube-apiserver-check-endpoints/0.log"
Mar 13 13:11:04.029768 master-0 kubenswrapper[19715]: I0313 13:11:04.029654 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_36d4251d3504cdc0ec85144c1379056c/setup/0.log"
Mar 13 13:11:04.721946 master-0 kubenswrapper[19715]: I0313 13:11:04.721881 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-kt5k7_3dbb64df-70d5-4d39-aefc-3567dc78a35a/cert-manager-controller/0.log"
Mar 13 13:11:04.740315 master-0 kubenswrapper[19715]: I0313 13:11:04.740256 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-kwnsl_e4124d51-e35d-4e96-ab7c-ea9f9f031826/cert-manager-cainjector/0.log"
Mar 13 13:11:04.755376 master-0 kubenswrapper[19715]: I0313 13:11:04.755324 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-vgc8q_443af3f8-080e-4540-8496-ef84da64a98e/cert-manager-webhook/0.log"
Mar 13 13:11:04.960551 master-0 kubenswrapper[19715]: I0313 13:11:04.960488 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-lwxxn_a8c840d1-8047-4ad6-a990-3ab119ae1cc5/kube-rbac-proxy/0.log"
Mar 13 13:11:05.026416 master-0 kubenswrapper[19715]: I0313 13:11:05.026279 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-lwxxn_a8c840d1-8047-4ad6-a990-3ab119ae1cc5/manager/1.log"
Mar 13 13:11:05.040197 master-0 kubenswrapper[19715]: I0313 13:11:05.040160 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-lwxxn_a8c840d1-8047-4ad6-a990-3ab119ae1cc5/manager/0.log"
Mar 13 13:11:05.770064 master-0 kubenswrapper[19715]: I0313 13:11:05.770013 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-kt5k7_3dbb64df-70d5-4d39-aefc-3567dc78a35a/cert-manager-controller/0.log"
Mar 13 13:11:05.790154 master-0 kubenswrapper[19715]: I0313 13:11:05.790095 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-kwnsl_e4124d51-e35d-4e96-ab7c-ea9f9f031826/cert-manager-cainjector/0.log"
Mar 13 13:11:05.814637 master-0 kubenswrapper[19715]: I0313 13:11:05.814567 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-vgc8q_443af3f8-080e-4540-8496-ef84da64a98e/cert-manager-webhook/0.log"
Mar 13 13:11:06.961928 master-0 kubenswrapper[19715]: I0313 13:11:06.961430 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-s8ztg_a704337a-47e8-4f3e-a4c1-a3e147a67125/nmstate-console-plugin/0.log"
Mar 13 13:11:06.989525 master-0 kubenswrapper[19715]: I0313 13:11:06.989450 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qmgtq_2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad/nmstate-handler/0.log"
Mar 13 13:11:07.011238 master-0 kubenswrapper[19715]: I0313 13:11:07.011185 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-cd9cr_5a03c104-eb50-4e42-b7df-16466c74cde4/nmstate-metrics/0.log"
Mar 13 13:11:07.032256 master-0 kubenswrapper[19715]: I0313 13:11:07.032187 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-cd9cr_5a03c104-eb50-4e42-b7df-16466c74cde4/kube-rbac-proxy/0.log"
Mar 13 13:11:07.053912 master-0 kubenswrapper[19715]: I0313 13:11:07.053837 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-bsppq_42ae4c26-cb33-47a7-b53b-b88f395f06e0/nmstate-operator/0.log"
Mar 13 13:11:07.081211 master-0 kubenswrapper[19715]: I0313 13:11:07.080565 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-2k9jb_189f87d2-721f-43b8-902f-a01a5187de82/nmstate-webhook/0.log"
Mar 13 13:11:07.721016 master-0 kubenswrapper[19715]: I0313 13:11:07.720959 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-dvqhc_b98acd6f-01ee-4862-ba43-72fa7b00c7da/prometheus-operator/0.log"
Mar 13 13:11:07.738256 master-0 kubenswrapper[19715]: I0313 13:11:07.738186 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d64756467-6lc6s_3d7479b8-c5f1-4cd7-8bab-80addabf411a/prometheus-operator-admission-webhook/0.log"
Mar 13 13:11:07.768157 master-0 kubenswrapper[19715]: I0313 13:11:07.768098 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-7d64756467-c25v5_68d08c55-918b-436a-9da6-7e1998d0c415/prometheus-operator-admission-webhook/0.log"
Mar 13 13:11:07.794869 master-0 kubenswrapper[19715]: I0313 13:11:07.794690 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-2dmz5_1033c510-5024-4164-89af-53acbd4dbe1c/operator/0.log"
Mar 13 13:11:07.814350 master-0 kubenswrapper[19715]: I0313 13:11:07.814290 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-s2769_68f8da86-43bb-4465-b4af-701321b0d5c6/perses-operator/0.log"
Mar 13 13:11:08.514775 master-0 kubenswrapper[19715]: I0313 13:11:08.514698 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-6c7r9_ffcc3a23-d81c-4064-a24a-857dbe3222c8/kube-multus/0.log"
Mar 13 13:11:08.535169 master-0 kubenswrapper[19715]: I0313 13:11:08.535106 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-wl6w4_6d1a0616-4479-4621-b042-36a586bd8248/kube-multus-additional-cni-plugins/0.log"
Mar 13 13:11:08.552768 master-0 kubenswrapper[19715]: I0313 13:11:08.552700 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-wl6w4_6d1a0616-4479-4621-b042-36a586bd8248/egress-router-binary-copy/0.log"
Mar 13 13:11:08.569393 master-0 kubenswrapper[19715]: I0313 13:11:08.569334 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-wl6w4_6d1a0616-4479-4621-b042-36a586bd8248/cni-plugins/0.log"
Mar 13 13:11:08.586432 master-0 kubenswrapper[19715]: I0313 13:11:08.586375 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-wl6w4_6d1a0616-4479-4621-b042-36a586bd8248/bond-cni-plugin/0.log"
Mar 13 13:11:08.600045 master-0 kubenswrapper[19715]: I0313 13:11:08.599986 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-wl6w4_6d1a0616-4479-4621-b042-36a586bd8248/routeoverride-cni/0.log"
Mar 13 13:11:08.616927 master-0 kubenswrapper[19715]: I0313 13:11:08.616862 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-wl6w4_6d1a0616-4479-4621-b042-36a586bd8248/whereabouts-cni-bincopy/0.log"
Mar 13 13:11:08.632977 master-0 kubenswrapper[19715]: I0313 13:11:08.632843 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-wl6w4_6d1a0616-4479-4621-b042-36a586bd8248/whereabouts-cni/0.log"
Mar 13 13:11:08.655232 master-0 kubenswrapper[19715]: I0313 13:11:08.655183 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7769569c45-fntms_d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7/multus-admission-controller/0.log"
Mar 13 13:11:08.672339 master-0 kubenswrapper[19715]: I0313 13:11:08.672279 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-7769569c45-fntms_d1f58cc0-8cd6-48a1-a3c5-b40a8bfeafb7/kube-rbac-proxy/0.log"
Mar 13 13:11:08.703924 master-0 kubenswrapper[19715]: I0313 13:11:08.703838 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-ztpxf_59c9773d-7e88-4e30-9b8a-792a869a860e/network-metrics-daemon/0.log"
Mar 13 13:11:08.717234 master-0 kubenswrapper[19715]: I0313 13:11:08.717162 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-ztpxf_59c9773d-7e88-4e30-9b8a-792a869a860e/kube-rbac-proxy/0.log"
Mar 13 13:11:09.250984 master-0 kubenswrapper[19715]: I0313 13:11:09.250926 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_lvms-operator-58d685d865-s68xl_2f25cb23-6d50-470e-9f45-203d4a680f46/manager/0.log"
Mar 13 13:11:09.287851 master-0 kubenswrapper[19715]: I0313 13:11:09.287788 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-w7dw4_b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb/vg-manager/1.log"
Mar 13 13:11:09.288812 master-0 kubenswrapper[19715]: I0313 13:11:09.288777 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-w7dw4_b74cd071-e9bc-45f8-a7a9-d7c8c9bd1afb/vg-manager/0.log"
Mar 13 13:11:10.187699 master-0 kubenswrapper[19715]: I0313 13:11:10.187640 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_aae10aa9-9c7d-4319-9829-e900af7df301/installer/0.log"
Mar 13 13:11:10.210381 master-0 kubenswrapper[19715]: I0313 13:11:10.209117 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_787f8414-a607-4672-bf7f-6494b4250de1/installer/0.log"
Mar 13 13:11:10.232568 master-0 kubenswrapper[19715]: I0313 13:11:10.232510 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-5-master-0_2a0e239c-fe39-43af-8b0a-2964897d8b92/installer/0.log"
Mar 13 13:11:10.257106 master-0 kubenswrapper[19715]: I0313 13:11:10.257046 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-6-master-0_b1e97610-42e2-4c62-82c8-787d8a4c8a05/installer/0.log"
Mar 13 13:11:10.304589 master-0 kubenswrapper[19715]: I0313 13:11:10.304507 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a38f6c36de78a5cb446093c52f21a20d/kube-controller-manager/1.log"
Mar 13 13:11:10.452032 master-0 kubenswrapper[19715]: I0313 13:11:10.451879 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a38f6c36de78a5cb446093c52f21a20d/kube-controller-manager/2.log"
Mar 13 13:11:10.530308 master-0 kubenswrapper[19715]: I0313 13:11:10.530221 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a38f6c36de78a5cb446093c52f21a20d/cluster-policy-controller/0.log"
Mar 13 13:11:10.544262 master-0 kubenswrapper[19715]: I0313 13:11:10.544201 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a38f6c36de78a5cb446093c52f21a20d/kube-controller-manager-cert-syncer/0.log"
Mar 13 13:11:10.565746 master-0 kubenswrapper[19715]: I0313 13:11:10.565679 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_a38f6c36de78a5cb446093c52f21a20d/kube-controller-manager-recovery-controller/0.log"
Mar 13 13:11:11.277788 master-0 kubenswrapper[19715]: I0313 13:11:11.277700 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-nwclt_0d868028-9984-472a-8403-ffed767e1bf8/kube-controller-manager-operator/0.log"
Mar 13 13:11:11.293005 master-0 kubenswrapper[19715]: I0313 13:11:11.292953 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-nwclt_0d868028-9984-472a-8403-ffed767e1bf8/kube-controller-manager-operator/1.log"
Mar 13 13:11:11.818603 master-0 kubenswrapper[19715]: I0313 13:11:11.818521 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-s8ztg_a704337a-47e8-4f3e-a4c1-a3e147a67125/nmstate-console-plugin/0.log"
Mar 13 13:11:11.863313 master-0 kubenswrapper[19715]: I0313 13:11:11.863099 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-qmgtq_2a96e1dc-c5e8-46f2-995c-b67ccc0ef3ad/nmstate-handler/0.log"
Mar 13 13:11:11.884307 master-0 kubenswrapper[19715]: I0313 13:11:11.884240 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-cd9cr_5a03c104-eb50-4e42-b7df-16466c74cde4/nmstate-metrics/0.log"
Mar 13 13:11:11.899612 master-0 kubenswrapper[19715]: I0313 13:11:11.899537 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-cd9cr_5a03c104-eb50-4e42-b7df-16466c74cde4/kube-rbac-proxy/0.log"
Mar 13 13:11:11.922181 master-0 kubenswrapper[19715]: I0313 13:11:11.922107 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-bsppq_42ae4c26-cb33-47a7-b53b-b88f395f06e0/nmstate-operator/0.log"
Mar 13 13:11:11.942013 master-0 kubenswrapper[19715]: I0313 13:11:11.941964 19715 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-2k9jb_189f87d2-721f-43b8-902f-a01a5187de82/nmstate-webhook/0.log"